
1516 Clustering Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

10 - 20 Lacs

Pune, Maharashtra, India

On-site


Senior Data Engineer (On-Site, India)

About The Opportunity
A high-growth innovator in the Analytics & Enterprise Data Management sector, we architect and deliver cloud-native data platforms that power real-time reporting, AI/ML workloads, and intelligent decisioning for global retail, fintech, and manufacturing leaders. Our engineering teams transform raw, high-velocity data into trusted, analytics-ready assets that drive revenue acceleration and operational excellence.

Role & Responsibilities
- Design, build, and optimise batch and streaming ETL/ELT pipelines on Apache Spark and Kafka, ensuring sub-minute latency and 99.9% uptime.
- Develop modular, test-driven Python code to ingest, cleanse, and enrich terabyte-scale datasets from relational, NoSQL, and API sources.
- Model data for analytics and AI, implementing star/snowflake schemas, partitioning, and clustering in BigQuery, Redshift, or Snowflake.
- Automate workflow orchestration with Apache Airflow, defining DAGs, dependency management, and robust alerting for SLA adherence.
- Collaborate with Data Scientists and BI teams to expose feature stores, curated marts, and self-service semantic layers.
- Enforce data-governance best practices (lineage, cataloguing, RBAC, and encryption) in compliance with GDPR and SOC 2 standards.

Skills & Qualifications
Must-Have:
- 3–6 years of hands-on experience engineering large-scale data pipelines in production.
- Expertise in Python and advanced SQL for ETL, optimisation, and performance tuning.
- Proven experience with Spark (PySpark or Scala) and streaming technologies such as Kafka or Kinesis.
- Deep knowledge of relational modelling, data-warehousing concepts, and at least one cloud DWH (BigQuery, Redshift, or Snowflake).
- Solid command of CI/CD, Git workflows, and containerisation (Docker).
Preferred:
- Exposure to infrastructure-as-code (Terraform, CloudFormation) and Kubernetes.
- Experience integrating ML feature stores and monitoring data quality with Great Expectations or similar tools.
- Certification in AWS, GCP, or Azure data services.

Benefits & Culture Highlights
- On-site, engineer-first environment with dedicated lab space and the latest Mac/Linux gear.
- Rapid career progression through technical mentorship, sponsored certifications, and conference budgets.
- Inclusive, innovation-driven culture that rewards outcome ownership and creative problem-solving.

Ready to architect next-gen data pipelines that power AI at scale? Apply now and join a mission-focused team turning data into competitive advantage.

Skills: airflow, docker, snowflake, apache spark, data engineering, redshift, sql, pyspark, python, data modeling, bigquery, ci/cd, apache airflow, data warehousing, spark, etl, kafka, git
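The cleanse-and-enrich responsibility above has a recognisable shape regardless of engine. As a purely illustrative sketch (plain Python standing in for a PySpark job; all record fields and function names are invented), an idempotent batch step that deduplicates, normalises, and derives a partition key might look like:

```python
from datetime import datetime, timezone

def cleanse_and_enrich(records, seen_keys=None):
    """Deduplicate raw events by key, normalise fields, and derive a
    partition date -- the shape of work a Spark pipeline does at scale.
    `records` is an iterable of dicts with 'id', 'email', and 'ts' (epoch secs)."""
    seen = set(seen_keys or ())
    out = []
    for rec in records:
        if rec.get("id") is None or rec["id"] in seen:
            continue  # drop malformed and duplicate events (safe to re-run)
        seen.add(rec["id"])
        ts = datetime.fromtimestamp(rec["ts"], tz=timezone.utc)
        out.append({
            "id": rec["id"],
            "email": rec.get("email", "").strip().lower(),  # cleanse
            "event_date": ts.date().isoformat(),            # partition key
        })
    return out

raw = [
    {"id": 1, "email": " A@X.COM ", "ts": 1700000000},
    {"id": 1, "email": " A@X.COM ", "ts": 1700000000},  # duplicate
    {"id": 2, "email": "b@y.com", "ts": 1700000000},
]
clean = cleanse_and_enrich(raw)
print(len(clean))  # 2
```

Passing previously seen keys back in on re-runs is what makes the step idempotent, which is the property that lets an Airflow DAG safely retry it after a failure.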

Posted 3 days ago

Apply

6.0 years

10 - 20 Lacs

Mumbai Metropolitan Region

On-site


Senior Data Engineer (On-Site, India). The role description, qualifications, and skills are identical to the Pune listing above.

Posted 3 days ago

Apply

6.0 years

10 - 20 Lacs

Greater Kolkata Area

On-site


Senior Data Engineer (On-Site, India). The role description, qualifications, and skills are identical to the Pune listing above.

Posted 3 days ago

Apply

6.0 years

10 - 20 Lacs

Bhubaneswar, Odisha, India

On-site


Senior Data Engineer (On-Site, India). The role description, qualifications, and skills are identical to the Pune listing above.

Posted 3 days ago

Apply

6.0 years

10 - 20 Lacs

Bengaluru, Karnataka, India

On-site


Senior Data Engineer (On-Site, India). The role description, qualifications, and skills are identical to the Pune listing above.

Posted 3 days ago

Apply

6.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: SQL DBA
Location: Chennai
Experience: 5-7 yrs

Role and Responsibilities
1. Extensive working knowledge of SQL Server and PostgreSQL.
2. Configure DBMS monitoring utilities to minimize false alarms. Implement automation using PowerShell scripts; good knowledge of PowerShell scripting is a must.
3. Comprehensive implementation knowledge of SSIS, SSAS, and SSRS.
4. Ensure all database servers are backed up in a way that meets the business's Recovery Point Objectives (RPO).
5. Test backups to ensure the business's Recovery Time Objectives (RTO) can be met.
6. Troubleshoot database service outages as they occur, including after-hours and weekends.
7. As new systems are brought in-house, choose whether to use clustering, log shipping, mirroring, SQL Azure, or Always On.
8. Implementation knowledge of high availability with Oracle Data Guard, GoldenGate, Always On, and SQL Azure.
9. Install and configure SQL Server, PostgreSQL, and Oracle.
10. Deploy database change scripts provided by third-party vendors.
11. SQL Server migration experience from older to newer versions.
12. Cross-platform migration experience (Oracle to SQL Server, Oracle to DB2, and MySQL to PostgreSQL).
13. When performance issues arise, determine the most effective way to increase performance, including hardware purchases, server configuration changes, or index/query changes.
14. Document the company's database environment.
15. Ensure that new database code meets company standards for readability, reliability, and performance.
16. Each week, give developers a list of the top 10 most resource-intensive queries on the server and suggest ways to improve the performance of each.
17. Design indexes for existing applications, choosing when to add or remove indexes.
18. When users complain about the performance of a particular query, help developers improve it by tweaking the query or modifying indexes.
19. Advise developers on the most efficient database designs (tables, datatypes, stored procedures, functions, etc.).
20. Implementation experience in application performance tuning and optimization.

The following will be the major responsibilities:
1. L2/L3 Microsoft SQL and PostgreSQL administration.
2. Experience in installing, configuring, and designing SQL instances.
3. Must demonstrate experience supporting, optimizing, and maintaining Microsoft SQL 2016/2019 infrastructure with high availability.
4. Working knowledge and hands-on experience in on-prem to Azure SQL Server migration is highly desirable.
5. PowerShell scripting knowledge is a must.
6. Good communication.
7. Ability to interact and coordinate with multiple stakeholders.

Some tasks include:
1. Monthly patching.
2. Cumulative patching.
3. Conducting SQL Server lunch-and-learn sessions for application developers.
4. Knowledge of virtualization (VMware, cloud) and storage concepts.

Additional Comments: Overall 6-8 years of experience with ITIL/ITSM Service Management. Should be willing to work in night shifts.
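The backup duties above (items 4-5) reduce to a simple invariant: the gap between any failure instant and the latest restorable backup must not exceed the RPO. A small illustrative check in Python (all function names are invented; a real DBA would drive this from msdb backup history or equivalent):

```python
def worst_case_data_loss(backup_times, failure_time):
    """Return the data-loss window (seconds) if a failure hits at
    `failure_time`, given completed-backup timestamps (epoch seconds)."""
    restorable = [t for t in backup_times if t <= failure_time]
    if not restorable:
        return None  # nothing to restore from before the failure
    return failure_time - max(restorable)

def meets_rpo(backup_times, failure_time, rpo_seconds):
    """True if restoring the newest pre-failure backup stays within the RPO."""
    loss = worst_case_data_loss(backup_times, failure_time)
    return loss is not None and loss <= rpo_seconds

# Hourly log backups; failures 30 and 80 minutes after the last one:
backups = [0, 3600, 7200]
print(meets_rpo(backups, 9000, rpo_seconds=3600))   # True: 1800 s of loss
print(meets_rpo(backups, 12000, rpo_seconds=3600))  # False: 4800 s of loss
```

The RTO side is the complementary test: actually restore the backup and time it, since an untested backup proves nothing about recovery time.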

Posted 3 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


https://forms.office.com/r/JT9GG2968G Kindly fill in the form. Profiles will be considered based only on the responses in the form.

Summary
We are seeking a highly skilled and experienced DBA to join our expanding Information Technology team. In this role, you will help develop and design technology solutions that are scalable, relevant, and critical to our company's success. You will join the team working on our new platform, built using MS SQL Server and MySQL Server. You will participate in all phases of the development lifecycle, implementation, maintenance, and support, and must have a solid skill set, a desire to continue to grow as a Database Administrator, and a team-player mentality.

Key Responsibilities
1. Primary responsibility will be the management of production database servers, including security, deployment, maintenance, and performance monitoring.
2. Setting up SQL Server replication, mirroring, and high availability as required across hybrid environments.
3. Design and implementation of new installations on Azure, AWS, and cloud hosting without specific DB services.
4. Deploy and maintain on-premise installations of SQL Server on Linux / MySQL installations.
5. Database security and protection against SQL injection, exploitation of intellectual property, etc.
6. Work with development teams, assisting with data storage and query design/optimization where required.
7. Participate in the design and implementation of essential applications.
8. Demonstrate expertise and add valuable input throughout the development lifecycle.
9. Help design and implement scalable, lasting technology solutions.
10. Review current systems, suggesting updates as required.
11. Gather requirements from internal and external stakeholders.
12. Document procedures to set up and maintain a highly available SQL Server database on Azure cloud, on-premise, and hybrid environments.
13. Test and debug new applications and updates.
14. Resolve reported issues and reply to queries in a timely manner.
15. Remain up to date on current best practices, trends, and industry developments.
16. Identify potential challenges and bottlenecks in order to address them proactively.

Key Competencies/Skillsets
- SQL Server management in hybrid environments (on-premise and cloud; preferably Azure, AWS)
- MySQL backup, SQL Server backup, replication, clustering, and log-shipping experience on Linux/Windows
- Setting up, managing, and maintaining SQL Server/MySQL on Linux
- Experience with database usage and management
- Experience implementing Azure Hyperscale databases
- Experience in the Financial Services / E-Commerce / Payments industry preferred
- Familiarity with multi-tier, object-oriented, secure application design architecture
- Experience in cloud environments, preferably Microsoft Azure database service tiers
- Experience with PCI DSS a plus
- SQL development experience a plus
- Linux experience a plus
- Proficient in using issue-tracking tools like Jira
- Proficient in using version control systems like Git, SVN, etc.
- Strong understanding of web-based applications and technologies
- Sense of ownership and pride in your performance and its impact on the company's success
- Critical thinker with problem-solving skills
- Excellent communication skills and the ability to communicate with clients via different modes of communication: email, phone, direct messaging, etc.

Preferred Education and Experience
1. Bachelor's degree in computer science or a related field.
2. Minimum 3 years' experience as a SQL Server DBA and 2+ years as a MySQL DBA, including replication, InnoDB Cluster, upgrading, and patching.
3. Ubuntu Linux knowledge is preferred.
4. MCTS, MCITP, and/or MVP / Azure DBA / MySQL certifications a plus.
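The replication and high-availability work described above usually comes with a lag-monitoring loop. A hypothetical sketch in Python: the field names follow MySQL's SHOW REPLICA STATUS output, but the thresholds, function name, and wiring are invented for illustration:

```python
def replica_health(status, max_lag_seconds=30):
    """Classify a replica from a SHOW REPLICA STATUS-style dict.
    Returns 'healthy', 'lagging', or 'broken'."""
    io_running = status.get("Replica_IO_Running") == "Yes"
    sql_running = status.get("Replica_SQL_Running") == "Yes"
    lag = status.get("Seconds_Behind_Source")
    if not (io_running and sql_running) or lag is None:
        return "broken"  # a stopped thread or NULL lag needs a DBA now
    return "healthy" if lag <= max_lag_seconds else "lagging"

print(replica_health({"Replica_IO_Running": "Yes",
                      "Replica_SQL_Running": "Yes",
                      "Seconds_Behind_Source": 3}))      # healthy
print(replica_health({"Replica_IO_Running": "Yes",
                      "Replica_SQL_Running": "No",
                      "Seconds_Behind_Source": None}))   # broken
```

The same three-way classification maps naturally onto SQL Server log shipping or Always On by swapping in the corresponding DMV columns.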

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By "Connecting Convenience" across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications such as GasBuddy. We're a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview:
Do you love building software that thrills your customers? Do you insist on the highest standards for the software your team develops? Are you a progressive software engineer, an advocate of agile development practices, and a proponent of continuous improvement? If this is you, then join an energetic team of engineers building the next generation of solutions at PDI! As an engineering leader, you will lead Agile engineering resources and provide guidance from inception through release of major and point product releases, including ongoing maintenance. You will work closely with your product managers, product owners, engineering leaders, your team, and other stakeholders. You will lead developers and quality engineers, partnering with CloudOps, TechOps, UX Design, and other cross-functional groups to evolve our solutions while continuing to improve your teams' adoption of SDLC processes, CI/CD integration, code quality, and automation test coverage.

Key Responsibilities:
- Lead an organization of 4-20 development and test engineers globally to efficiently produce high-quality deliverables
- Manage team leads, direct reports, or a mix of both
- Manage several deliverables for a product line on time, on scope, and on quality
- Instrument your processes, produce scorecards of progress regularly, and establish a regular cadence of operational reviews with your management, including quality metrics, coding efficiencies, improvements, challenges, and remediation needs
- Correlate, report, and drive the adoption of Process/Continuous Improvement initiatives
- Recruit and provide leadership, coaching, and career planning for engineering talent
- Be accountable for design decisions for new and existing application development, proactively escalating issues and seeking assistance to overcome obstacles
- Partner with Product Management to consult on solution feasibility and high-level effort estimation
- Communicate with customers to ensure that expectations and support needs are met
- Provide architectural guidance to your teams towards our PDI Cloud & Platform strategy
- Make recommendations for technology adoption and framework improvement, analyzing trends, patterns, and best practices for software
- Serve as the evangelist and custodian of technology, architecture, and product development practices
- Participate in the design and implementation of production cloud-grade services supporting high availability
- Actively talent-manage your team, providing career planning and performance improvement activities when needed

Qualifications:
- 5+ years of experience leading software engineers for product development
- Experience managing capitalized software processes
- Preferred: experience managing teams' operational health by analyzing product teams' work distribution (Capex, OpEx, Maintenance, Billable, and OH)
- Preferred: experience managing the organizational structure of teams as well as headcount and non-headcount budgets
- 10+ years of combined experience in software engineering, enterprise architecture, and/or DevOps
- Working experience with scaled software architecture and domains: performance, redundancy, failover, clustering, vertical scaling
- Working experience with source code management patterns and DevOps automation
- Proficient in API design, development, and production operation
- Working experience with at least one mainstream operating system and IP networking
- Working experience managing production client and server code bases across one or more technology stacks
- Working experience with production SQL schema design, queries, and administration in one or more mainstream relational and/or NoSQL databases
- Preferred: working experience with orchestration, automation, and configuration management processes and related DevOps tools and cloud platforms
- Preferred: working experience with event-based systems, streaming architecture, and related technologies
- Highly motivated self-starter with a desire to help others and take action
- Strong written and verbal communication skills, with the ability to translate technical concepts into non-technical terms
- Ability to work independently as a contributing member of a high-paced and focused team
- Ability to multi-task and prioritize tasks with competing deadlines
- Strong problem-solving and analytical skills, with the ability to work under pressure
- Ability to socialize ideas and influence decisions without direct authority
- Collaborative in nature, with a strong desire to dig in and learn independently as well as through asking questions
- Considers 'best-practice' standards, as well as departmental policies and procedures

Behavioral Competencies: Ensures Accountability; Manages Complexity; Communicates Effectively; Balances Stakeholders; Collaborates Effectively

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
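The scorecards this role describes are typically a handful of ratios tracked per release. A toy illustration in Python (the metric names, weights, and thresholds are all invented, not PDI's actual process):

```python
def release_scorecard(metrics):
    """Summarise a release from raw counts into the review-ready ratios
    an engineering leader might track (all fields illustrative)."""
    coverage = metrics["covered_lines"] / metrics["total_lines"]
    # Defects that escaped to production, as a share of all defects found:
    escape_rate = metrics["escaped_defects"] / max(metrics["total_defects"], 1)
    return {
        "test_coverage_pct": round(100 * coverage, 1),
        "defect_escape_pct": round(100 * escape_rate, 1),
        "on_time": metrics["actual_days"] <= metrics["planned_days"],
    }

card = release_scorecard({
    "covered_lines": 8200, "total_lines": 10000,
    "escaped_defects": 3, "total_defects": 60,
    "planned_days": 90, "actual_days": 84,
})
print(card)  # {'test_coverage_pct': 82.0, 'defect_escape_pct': 5.0, 'on_time': True}
```

Reviewing a card like this per release, rather than raw counts, is what makes trends across teams comparable in an operational review.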

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview: Do you love building software that thrills your customers? Do you insist on the highest standards for the software your team develops? Are you a progressive software engineer, an advocate of agile development practices, and a proponent of continuous improvement? If this is you, then join and energetic team of engineers building next generation of solutions at PDI! As an engineering leader, you will lead Agile engineering resources & provide guidance from inception through release of major & point product releases, including ongoing maintenance. You will be working closely with your product managers, product owners, engineering leaders, your team and other stakeholders. You will be leading developers, quality engineers and partnering with CloudOps, TechOps, UX Design other cross functional functional groups to evolve our solutions while continuing to improve your teams’ adoption of SDLC processes, CI/CD integration, code quality & automation test coverage. 
Key Responsibilities: Lead an organization of 4-20 development & test engineers globally to efficiently produce high quality deliverables Manage team leads, direct reports or a mix of both Manage several deliverables for a product line on time, on scope and on quality Instrument your processes, produce scorecards of progress regularly and establish a regular cadence of operational reviews with your management including quality metrics, coding efficiencies, improvements, challenges, remediation needs Correlate, report, and drive the adoption of Process/Continuous Improvement initiatives Recruit & provide leadership, coaching & career planning for engineering talent Be accountable for design decisions for new and existing application development, proactively escalating issues and seeking assistance to overcome obstacles Partner with Product Management to consult on solution feasibility and high-level effort estimation Communicate with customers to ensure that expectations and support needs are met Provide architectural guidance to your teams towards our PDI Cloud & Platform strategy Make recommendation for technology adoption and framework improvement, analyzing trends, patterns and best practices for software Serve as the evangelist and custodian of technology, architecture, and product development practices Participate in the design & implementation of production cloud grade services supporting high availability Actively talent manage your team providing career planning & performance improvement activities when needed Qualifications: 5+ years of experience leading software engineers for product development Experience managing capitalized software processes Preferred: experience with managing teams' operational health by analyzing product teams' work distribution Capex, OpenX, Maintenance, Billable and OH Preferred: experience managing the organizational structure of teams as well as headcount & non-headcount budgets 10+ years of combined experience in software 
engineering, enterprise architecture and/or DevOps Working experience with scaled software architecture & domain: performance, redundancy, failover, clustering, vertical scaling  Working experience with source code management patterns and DevOps automation Proficient in API design, development & production operation Working experience with at least one mainstream operating system and IP networking Working experience managing production client & server code bases across one or more technology stacks Working experience with production SQL schema design, queries & administration in one or more mainstream relational and/or no-SQL databases Preferred: working experience with orchestration, automation, and configuration management processes & related DevOps tools & cloud platforms Preferred: working experience with event-based systems, streaming architecture & related technologies Highly motivated self-starter with a desire to help others and take action Requires strong written and verbal communication skills with the ability to translate technical concepts into non-technical terms Ability to independently work as a contributing member in a high-paced and focused team Ability to multi-task and prioritize tasks with competing deadlines Strong problem-solving and analytical skills with the ability to work under pressure Ability to socialize ideas and influence decisions without direct authority Collaborative in nature with a strong desire to dig in and learn independently and as well as through asking questions Considers ‘best-practice’ standards, as well as departmental policies and procedures Behavioral Competencies: Ensures Accountability Manages Complexity Communicates Effectively Balances Stakeholders Collaborates Effectively PDI is committed to offering a well-rounded benefits program, designed to support and care for you, and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. 
We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview: Do you love building software that thrills your customers? Do you insist on the highest standards for the software your team develops? Are you a progressive software engineer, an advocate of agile development practices, and a proponent of continuous improvement? If this is you, then join an energetic team of engineers building the next generation of solutions at PDI! As an engineering leader, you will lead Agile engineering resources & provide guidance from inception through release of major & point product releases, including ongoing maintenance. You will be working closely with your product managers, product owners, engineering leaders, your team and other stakeholders. You will be leading developers and quality engineers and partnering with CloudOps, TechOps, UX Design and other cross-functional groups to evolve our solutions while continuing to improve your teams’ adoption of SDLC processes, CI/CD integration, code quality & automation test coverage.
Key Responsibilities: Lead an organization of 4-20 development & test engineers globally to efficiently produce high-quality deliverables Manage team leads, direct reports or a mix of both Manage several deliverables for a product line on time, on scope and on quality Instrument your processes, produce scorecards of progress regularly and establish a regular cadence of operational reviews with your management, including quality metrics, coding efficiencies, improvements, challenges, remediation needs Correlate, report, and drive the adoption of Process/Continuous Improvement initiatives Recruit & provide leadership, coaching & career planning for engineering talent Be accountable for design decisions for new and existing application development, proactively escalating issues and seeking assistance to overcome obstacles Partner with Product Management to consult on solution feasibility and high-level effort estimation Communicate with customers to ensure that expectations and support needs are met Provide architectural guidance to your teams towards our PDI Cloud & Platform strategy Make recommendations for technology adoption and framework improvement, analyzing trends, patterns and best practices for software Serve as the evangelist and custodian of technology, architecture, and product development practices Participate in the design & implementation of production cloud-grade services supporting high availability Actively talent-manage your team, providing career planning & performance improvement activities when needed Qualifications: 5+ years of experience leading software engineers for product development Experience managing capitalized software processes Preferred: experience with managing teams' operational health by analyzing product teams' work distribution (CapEx, OpEx, Maintenance, Billable and OH) Preferred: experience managing the organizational structure of teams as well as headcount & non-headcount budgets 10+ years of combined experience in software
engineering, enterprise architecture and/or DevOps Working experience with scaled software architecture & domain: performance, redundancy, failover, clustering, vertical scaling Working experience with source code management patterns and DevOps automation Proficient in API design, development & production operation Working experience with at least one mainstream operating system and IP networking Working experience managing production client & server code bases across one or more technology stacks Working experience with production SQL schema design, queries & administration in one or more mainstream relational and/or NoSQL databases Preferred: working experience with orchestration, automation, and configuration management processes & related DevOps tools & cloud platforms Preferred: working experience with event-based systems, streaming architecture & related technologies Highly motivated self-starter with a desire to help others and take action Strong written and verbal communication skills with the ability to translate technical concepts into non-technical terms Ability to work independently as a contributing member of a high-paced and focused team Ability to multi-task and prioritize tasks with competing deadlines Strong problem-solving and analytical skills with the ability to work under pressure Ability to socialize ideas and influence decisions without direct authority Collaborative in nature with a strong desire to dig in and learn independently as well as through asking questions Considers ‘best-practice’ standards, as well as departmental policies and procedures Behavioral Competencies: Ensures Accountability Manages Complexity Communicates Effectively Balances Stakeholders Collaborates Effectively PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program.
We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity and inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
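The redundancy and failover experience called for in the listing above can be illustrated with a minimal client-side failover sketch. The endpoint names and the `request` callable are invented for illustration, not PDI APIs; real deployments would typically add health checks and sit behind a load balancer:

```python
def call_with_failover(request, endpoints):
    """Try each replica in order; return the first successful response."""
    last_error = None
    for endpoint in endpoints:
        try:
            return request(endpoint)
        except ConnectionError as err:
            last_error = err  # this replica is down; try the next one
    raise last_error  # every replica failed

# usage sketch (hypothetical fetch function and endpoints):
# call_with_failover(fetch, ["https://primary", "https://replica-1", "https://replica-2"])
```

The same pattern generalizes to retries with backoff or to weighted replica selection; the key design choice here is surfacing the last error only after every replica has been tried.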

Posted 3 days ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site

Linkedin logo

Senior analyst with ML model experience for Regions Bank Job Description: Primary Responsibilities for a Risk Data Scientist on the BSA/AML/OFAC Model Development and Monitoring Team: Design and develop transaction monitoring scenarios. Improve segmentation for the scenarios/models using techniques such as clustering. Execute periodic tuning of the threshold parameters of the scenarios through sample collection. Develop post-processing models to reduce the number of false positives generated by the scenarios, using techniques such as rare-event logistic regression and machine learning algorithms. Research and develop algorithms that incorporate fuzzy logic for OFAC sanction screening and other types of screening processes. Develop post-processing models that incorporate Natural Language Processing techniques to reduce the number of false positives. Implement models. Develop and execute an ongoing monitoring plan to track the performance of the models. Document the model development, especially the implementation process. Support model validation activities. Perform ad hoc analysis to address requests from business partners. Requirements: Bachelor’s degree in Statistics, Data Science, Operations Research, Industrial Engineering, Mathematics, or Physics AND six (6) years related experience; OR Master’s degree in Statistics, Data Science, Operations Research, Industrial Engineering, Mathematics, or Physics AND four (4) years related experience; OR PhD degree in Statistics, Operations Research, Industrial Engineering, Mathematics, or Physics AND two (2) years related experience. Skills and Competencies: High proficiency with Python, R, SAS, and SQL. Advanced data sourcing and management skills. Extensive experience with web scraping. Knowledge of and experience with the CDSW platform. Experience with software development, especially deployment. Experience with classification models.
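As a rough illustration of the fuzzy-logic screening idea described above, here is a minimal sketch that uses the standard library's `difflib` as a stand-in for a production fuzzy-matching engine. The names and the 0.8 threshold are invented for the example, not Regions Bank parameters:

```python
from difflib import SequenceMatcher


def screening_score(candidate: str, watchlist_name: str) -> float:
    """Similarity in [0, 1] between a transaction party and a watchlist entry."""
    return SequenceMatcher(None, candidate.lower(), watchlist_name.lower()).ratio()


def screen(candidate: str, watchlist: list[str], threshold: float = 0.8) -> list[str]:
    """Return watchlist entries whose similarity meets the alert threshold."""
    return [name for name in watchlist if screening_score(candidate, name) >= threshold]


# A misspelled name still triggers an alert against the close watchlist entry.
hits = screen("Jon Smyth", ["John Smith", "Jane Doe"])
```

In practice the threshold itself would be one of the parameters tuned periodically through sample collection, as the posting describes.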
Location: DGS India - Bengaluru - Manyata N1 Block Brand: Merkle Time Type: Full time Contract Type: Permanent

Posted 4 days ago

Apply

3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Data Scientist II-3 Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Overview Finicity, a Mastercard company, is leading the Open Banking Initiative to increase the Financial Health of consumers and businesses. The Data Science and Analytics team is looking for a Data Scientist II. The Data Science team works on Intelligent Decisioning; Financial Certainty; Attribute, Feature, and Entity Resolution; Verification Solutions and much more. Join our team to make an impact across all sectors of the economy by consistently innovating and problem-solving. The ideal candidate is passionate about leveraging data to provide high quality customer solutions. Also, the candidate is a strong technical leader who is extremely motivated, intellectually curious, analytical, and possesses an entrepreneurial mindset. 
Role Manipulate large data sets and apply various technical and statistical analytical techniques (e.g., OLS, multinomial logistic regression, LDA, clustering, segmentation) to draw insights from large datasets. Apply various machine learning (e.g., SVM, Random Forest, XGBoost, LightGBM, CatBoost) and deep learning techniques (e.g., LSTM, RNN, Transformer) to solve analytical problem statements. Design and implement machine learning models for a number of financial applications including but not limited to: transaction classification, temporal analysis, risk modeling from structured and unstructured data. Measure, validate, implement, monitor and improve performance of both internal and external facing machine learning models. Propose creative solutions to existing challenges that are new to the company, the financial industry and to data science. Present technical problems and findings to business leaders internally and to clients succinctly and clearly. Leverage best practices in machine learning and data engineering to develop scalable solutions. Identify areas where resources fall short of needs and provide thoughtful and sustainable solutions to benefit the team. Be a strong, confident, and excellent writer and speaker, able to communicate your analysis, vision and roadmap effectively to a wide variety of stakeholders. All About You: 3–5 years in data science / machine learning model development and deployment Exposure to financial transactional structured and unstructured data, transaction classification, risk evaluation and credit risk modeling is a plus. A strong understanding of NLP, statistical modeling, visualization and advanced data science techniques/methods. Gain insights from text, including non-language tokens, and use the thought process of annotations in text analysis.
Solve problems that are new to the company, the financial industry and to data science. SQL / database experience is preferred. Experience with Kubernetes, containers, Docker, REST APIs, event streams or other delivery mechanisms. Familiarity with relevant technologies (e.g. TensorFlow, Python, scikit-learn, Pandas, etc.). Strong desire to collaborate and ability to come up with creative solutions. Additional finance and FinTech experience preferred. Bachelor’s or Master’s Degree in Computer Science, Information Technology, Engineering, Mathematics, Statistics. Corporate Security Responsibility Every person working for, or on behalf of, Mastercard is responsible for information security. All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that the successful candidate for this position must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. #AI R-247687
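In practice, the clustering and segmentation work mentioned above would use a library such as scikit-learn; as a dependency-free illustration of the underlying technique, here is a minimal k-means sketch on 1-D transaction amounts (the data and choice of k are made up for the example):

```python
import random


def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal k-means on 1-D data: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)  # initialize from k distinct data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # an empty cluster keeps its previous centroid
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)


# e.g. small everyday purchases vs. large transfers
amounts = [4.0, 5.5, 6.0, 980.0, 1010.0, 995.0]
centroids = kmeans_1d(amounts, k=2)
```

On this toy data the two centroids settle near the small-purchase and large-transfer groups, which is the shape of segmentation work the role describes.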

Posted 4 days ago

Apply

0 years

2 - 3 Lacs

Hyderābād

On-site

GlassDoor logo

Overview: As Sales Sr. Mgr., ensure that his/her analyst team provides exceptional leadership & operational direction to sales employees across multiple teams and markets. His/her Planogram Analysts deliver visually appealing planograms based on store clustering, space definitions and defined flow. Work closely with Category Management and Space teams to ensure planograms meet approved parameters. Conduct planogram quality audits ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics. Continuously identify opportunities and implement processes to improve quality, timeliness of output and process efficiency through automation. Responsibilities: Head the DX Sector Planogram Analyst team and ensure efficient, effective and comprehensive support of the sales employees across multiple teams and markets Lead and manage the Planogram Analyst work stream by working closely with the Sector Space & Planogram team Ensure accurate and timely delivery of tasks regarding: deliver visually appealing versioned planograms based on store clustering, space definitions and defined flow; work closely with Category Management and Space teams to ensure planograms meet approved parameters; conduct planogram quality control ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics; electronically deliver planograms to both internal teams and external customer-specific systems; manage multiple project timelines simultaneously; ensure timelines are met by tracking project progress, coordinating activities and resolving issues; build and maintain relationships with internal project partners; manage planogram version/store combinations and/or store planogram assignments and provide reporting and data as needed; maintain the planogram database with the most updated planogram files; retain planogram models and files for historical reference, as needed Invest in and drive adoption of industry best
practices across regions/sector, as required Partner with global teams to define strategy for end-to-end execution ownership and accountability Lead workload forecasting and effectively drive prioritization conversations to support capacity management Build stronger business context and elevate the team’s capability from execution-focused to end-to-end capability-focused Ensure delivery of accurate and timely planograms in accordance with agreed service level agreements (SLAs) Work across multiple functions to aid in collecting insights for action-oriented cause-of-change analysis Focus on speed of execution and quality of service delivery rather than mere achievement of SLAs Recognize opportunities and take action to improve delivery of work Implement continued improvements and simplifications of processes and optimal use of technology Scale up operations in line with business growth, both within existing scope as well as in new areas of opportunity Create an inclusive and collaborative environment People Leadership Enable direct reports’ capabilities and enforce consistency in execution of key capability areas: planogram QC, development and timely delivery Responsible for hiring, talent assessment, competency development, performance management, productivity improvement, talent retention, career planning and development Provide and receive feedback about the global team and support effective partnership. Qualifications: 10+ yrs. of retail/merchandising experience (including JDA) 2+ yrs.
of people leadership experience in a Space Planning/planogram environment Bachelor’s in commerce/business administration/marketing; Master’s degree is a plus Advanced-level skill in Microsoft Office, with demonstrated intermediate-to-advanced Excel skills necessary Experience with analyzing and reporting data to identify issues, trends, or exceptions to drive improvement of results and find solutions Advanced knowledge and experience of the space management technology platform JDA Propensity to learn PepsiCo software systems Ability to provide superior customer service Best-in-class time management skills, ability to multitask, set priorities and plan

Posted 4 days ago

Apply

4.0 - 7.0 years

2 - 3 Lacs

Hyderābād

On-site

GlassDoor logo

Overview: As Planogram Analyst, deliver visually appealing versioned planograms based on store clustering, space definitions and defined flow. Work closely with Category Management and Space teams to ensure planograms meet approved parameters. Conduct planogram quality control ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics. Continuously identify opportunities and implement processes to improve quality and timeliness of output. Responsibilities: Be a single point of contact for the category/region by mastering process and category knowledge. Partner with Category Managers / KAMs to build business context and create effortless partnerships. Acquire project management skills to lead multiple projects seamlessly and ensure timely delivery of projects. Knowledge Sharing: Gain in-depth knowledge of PepsiCo business, categories, products, tools and share new learnings with the team on a continual basis. Ensure accurate and timely delivery of projects regarding: Deliver visually appealing versioned planograms based on store clustering, space definitions and defined flow Conduct planogram quality control ensuring all planograms meet assortment requirements, visual appeal, innovation opportunities and shelving metrics Ensure timelines are met by tracking project progress, coordinating activities, and resolving issues Leverage data to allocate the right space for the right product. Avoid redundancy in reporting and call out best practices to the team Display a high sense of accountability when completing requests with high visibility or tight turnaround times. Scale up growth by identifying areas where CI is required, both within existing scope as well as in new areas of opportunity.
Create an inclusive and collaborative environment Work in a team environment with focus on achieving team goals vs individual goals Actively learn and apply an advanced level of expertise in JDA, intermediate MS Excel, and all other relevant applications. Work alongside peers, inculcate best practices and elevate the team's ability to tackle business questions with value adds. Qualifications: 4 - 7 years of experience in Space Planning – JDA, Retail or FMCG experience. Bachelor’s degree. Intermediate-level skill in Microsoft Office, with demonstrated intermediate Excel skills necessary Ability to solve problems. Advanced knowledge and experience of the space management technology platform JDA Ability to work collaboratively and proactively with multi-functional teams / stakeholders. Best-in-class time management and prioritization skills. Excellent written and oral communication skills; proactively communicates using appropriate methods for the situation and audience in a clear, concise and professional manner Strong data analysis skills with strong attention to detail Basic project management skills

Posted 4 days ago

Apply

0 years

4 - 10 Lacs

Pune

On-site

GlassDoor logo

Infra and DevOps Engineer, AS Job ID: R0391182 Full/Part-Time: Full-time Regular/Temporary: Regular Listed: 2025-06-20 Location: Pune Position Overview Job Title: Infra and DevOps Engineer Location: Pune, India Corporate Title: AS Role Description The Infra & DevOps team within DWS India sits horizontally over project delivery, committed to providing best-in-class shared services across the build, release and QA automation space. Its main functional areas encompass environment build, integration of the QA automation suite, release and deployment automation management, technology management and compliance management. This role will be key to our programme delivery and includes working closely with stakeholders including client Product Owners, the Digital Design Organisation, Business Analysts, Developers and QA to advise and contribute from an Infra and DevOps capability perspective: building and maintaining non-prod and prod environments, setting up end-to-end alerting and monitoring for ease of operation, and overseeing the transition of the project to L2 support teams as part of Go Live. What we’ll offer you As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complimentary health screening for 35 yrs. and above Your key responsibilities Drives automation (incl. automated build, test and deploy) Supports and manages Data / Digital systems architecture (underlying platforms, APIs, UI, datasets …) in line with the architectural vision set by the Digital Design Organisation across environments.
Drives integration across systems, working to ensure the service layer integrates with the core technology stack whilst ensuring that services integrate to form a service ecosystem Monitors digital architecture to ensure health and identify required corrective action Serves as a technical authority, working with developers to drive architectural standards on the specific platforms they are developing upon Builds security into the overall architecture, ensuring adherence to security principles set within IT and adherence to any required industry standards Liaises with IaaS and PaaS service providers within the Bank to enhance their offerings Liaises with other technical areas, conducting technology research, and evaluating software required for maintaining the development environment Works with the wider QA function within the business to drive Continuous Testing by integrating QA automation suites with available toolsets. Your skills and experience Proven hands-on technical experience in Linux/Unix is a must-have. Proven experience in infrastructure architecture: clustering, high availability, performance and tuning, backup and recovery. Hands-on experience with DevOps build and deploy tools like TeamCity, Git / Bitbucket / Artifactory, and knowledge of automation/configuration management using tools such as Ansible or similar. A working understanding of coding and scripting languages such as Python, Perl, Ruby or JS. In-depth knowledge of and experience in Docker technology, OpenShift and Kubernetes containerisation. Ability to deploy complex solutions based on IaaS, PaaS and public and private cloud-based infrastructures. Basic understanding of networking and firewalls.
Knowledge of best practices and IT operations in an agile environment Ability to deliver independently: confidently able to translate requirements into technical solutions with minimal supervision Collaborative by nature: able to work with scrum teams, technical teams, the wider business, and IT&S to provide platform-related knowledge Flexible: finds a way to say yes and to make things happen, only exercising authority as needed to prevent the architecture from breaking Coding and scripting: able to develop in multiple languages in order to mobilise, configure and maintain digital platforms and architecture Automation and tooling: strong knowledge of the automation landscape, with the ability to rapidly identify and mobilise appropriate tools to support testing, deployment, etc. Security: understands security requirements and can independently drive compliance Education / Certification Any relevant DevOps certification. Bachelor’s degree from an accredited college or university with a concentration in Science, Engineering, or an IT-related discipline (or equivalent). How we’ll support you Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
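Much of the deployment automation and alerting work described above reduces to small glue scripts. As a hedged sketch in Python (one of the scripting languages the posting lists), here is a retry-with-exponential-backoff helper of the kind such tooling often uses; the function names and delay values are illustrative assumptions, not DWS tooling:

```python
import time


def retry(op, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call op(); on failure, wait base_delay * 2**n seconds and try again.
    Re-raises the last error once all attempts are exhausted."""
    for n in range(attempts):
        try:
            return op()
        except Exception:
            if n == attempts - 1:
                raise
            sleep(base_delay * (2 ** n))  # exponential backoff between tries


# usage sketch (hypothetical health-check function):
# retry(lambda: check_health("https://service/healthz"))
```

Injecting `sleep` as a parameter keeps the helper testable without real delays, a common design choice in deployment scripts.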

Posted 4 days ago

Apply

2.0 years

6 - 10 Lacs

Bengaluru

On-site

GlassDoor logo

Job Description About Us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Target in India operates as a fully integrated part of Target’s global team and has more than 4,000 team members supporting the company’s global strategy and operations. Tech Overview: Every time a guest enters a Target store or browses Target.com, they experience the impact of Target’s investments in technology and innovation. We’re the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 4,000 engineers, data scientists, architects, coaches and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests—and we do so with a focus on diversity and inclusion, experimentation and continuous learning. Pyramid Overview: Target.com and Mobile translate the in-store experience our guests love to the digital environment. Our Mobile Engineers develop native apps like Cartwheel and Target’s flagship app, which are high-impact and high-visibility assets that are game-changers for literally millions of guests. Here, you’ll get to explore emerging retail and mobile technologies, playing a key role in revolutionary product launches with tech giants like Apple and Google. You’ll be a visionary for the future of Target’s app ecosystem. You’ll have the advantage of Target’s unmatched brand recognition and special marketplace foothold—making us the partner of choice for innovative technologies like indoor mapping, iBeacons and Apple Pay. You’ll help Target evolve by using the latest open source tools and technologies and staying true to strong agile practices.
You’ll lend your passion for engineering technologies that fix problems and meet needs guests didn’t even know they had. You’ll work on autonomous teams and incorporate the newest technical practices. You’ll have the chance to perform by writing rock-solid code that stands up to our massive scale. Plus, and perhaps best of all, you’ll have the right balance of self-rule and accountability for how technical products perform. Team Overview: We are dedicated to ensuring a seamless and efficient checkout experience for Guests shopping on our digital channels, including web and mobile apps. Our team plays a crucial role in the overall shopping journey, focusing on the final and most critical steps of the purchase process. We are responsible for managing the seamless payments experience during Checkout, from the moment a Guest adds a payment to their cart to the final purchase confirmation. Our goal is to provide a smooth, secure, and user-friendly checkout process that enhances customer satisfaction and drives conversions. Our team is cross-geo located, with members driving different features and collaborating from both India and the US. This diverse setup allows us to leverage a wide range of expertise and perspectives, fostering innovative solutions and effective problem-solving. As part of the Digital Payments team, you will have the opportunity to work with cutting-edge technologies and innovative solutions to continuously improve the Checkout experience. Our collaborative and dynamic environment encourages creative problem-solving and the sharing of ideas to meet the evolving needs of our Guests. Position Overview: Able to implement new features/fixes within the current framework with little or no direction. Able to troubleshoot problems and devise solutions for root cause. Hands-on development, often taking on the more complicated tasks. Ensures solution is production ready, deployable, scalable and resilient.
Has advanced skills around technology for their area. Examples may include: computing topics, threading models, performance considerations, caching, database indexing, operating system internals, networking, infrastructure systems and operations. Researches the best design and new technologies for a given problem. Evaluates technologies and documents decision making. Understands how the solution is deployed; examples may include: VMs, containers, clustering, load balancing, DNS, networking, and scalability. Recommends changes to internal processes and procedures when deficiencies are observed. Articulates the value of a technology. Approaches all engineering work with a security lens and actively looks for security vulnerabilities within code/infrastructure architecture when providing peer reviews. Contributes to open source where applicable. Helps tune and change the observability on their team accordingly. Is aware of the operational data for their team’s domain and uses it as a basis for suggesting stability and performance improvements. About You: Experience: 2–4 years 4-year degree or equivalent experience Excellent communication skills with both business partners and other engineering teams Familiar with Agile principles and possess a team attitude Strong problem solving and debugging skills Strong sense of ownership and the ability to work with a limited set of requirements Experience engineering applications for the JVM; Java or Kotlin experience is definitely needed. Experience in microservices, Spring Boot, and event-driven architecture Experience building CI/CD pipelines Exposure to building high-performance scalable APIs is a plus.
Knowledge of NoSQL technologies such as Cassandra, Elasticsearch, or MongoDB is a plus Good at writing unit and functional tests and practicing test-driven development Know More About Us Here: Life at Target- https://india.target.com/ Benefits- https://india.target.com/life-at-target/workplace/benefits Culture- https://india.target.com/life-at-target/belonging

Posted 4 days ago

Apply

2.0 - 4.0 years

7 - 9 Lacs

Chennai

On-site

GlassDoor logo

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for working cross-functionally to collect data and develop models to determine trends utilizing a variety of data sources. Retrieves, analyzes and summarizes business, operations, employee, customer and/or economic data in order to develop business intelligence, optimize effectiveness, predict business outcomes, and support decision-making. Involved with numerous key business decisions by conducting the analyses that inform our business strategy. This may include: impact measurement of new products or features via normalization techniques, optimization of business processes through robust A/B testing, clustering or segmentation of customers to identify opportunities for differentiated treatment, deep dive analyses to understand drivers of key business trends, identification of customer sentiment drivers through natural language processing (NLP) of verbatim responses to Net Promoter System (NPS) surveys and development of frameworks to drive upsell strategy for existing customers by balancing business priorities with customer activity. Works with moderate guidance in own area of knowledge. Job Description 1. 2–4 years of professional experience in software or data engineering roles. 2.
Hands-on experience with Power BI, Power BI Desktop, Power Apps, and Power Automate. 3. Proficiency with Tableau and SharePoint. 4. Familiarity with Amazon Redshift and SAP integration and data extraction. 5. Strong analytical, troubleshooting, and communication skills. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 2-5 Years

Posted 4 days ago

Apply

0 years

5 - 8 Lacs

Noida

On-site

GlassDoor logo

Senior Gen AI Engineer Job Description Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following: Lead the evaluation and selection of foundation models and vector databases based on performance and business needs Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Guide the design of multi-step RAG, agentic, or tool-augmented workflows Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability) Mentor junior engineers and review GenAI design and implementation plans Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities Collaborate with leadership to align GenAI initiatives with product and business strategy Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling.
Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understand and develop state-management workflows using LangGraph. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders
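For context on the RAG workflow described above, here is a miniature, purely illustrative sketch. It substitutes a toy hash "embedding" and an in-memory list for a real embedding model and a vector store such as Pinecone, FAISS, or AWS OpenSearch; all document text and names are invented:

```python
import hashlib
import math

DIM = 256  # toy embedding dimensionality

def embed(text):
    """Toy deterministic 'embedding': hash each word into a dense vector.
    A real pipeline would call an embedding model here instead."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        word = word.strip(".,?!")
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class ToyVectorStore:
    """Stands in for a managed vector store (Pinecone, FAISS, OpenSearch)."""
    def __init__(self):
        self.items = []

    def add(self, doc):
        self.items.append((embed(doc), doc))

    def search(self, query, k=1):
        ranked = sorted(self.items, key=lambda item: -cosine(embed(query), item[0]))
        return [doc for _, doc in ranked[:k]]

def build_prompt(query, store):
    """Retrieve the best-matching document, then augment the LLM prompt with it."""
    context = store.search(query, k=1)[0]
    return f"Context: {context}\nQuestion: {query}"

store = ToyVectorStore()
store.add("Work orders are closed automatically after 30 days of inactivity")
store.add("Assets are depreciated using the straight-line method")
prompt = build_prompt("When are work orders closed?", store)
```

In production, embed would call a hosted embedding model and ToyVectorStore would be a managed vector database; the retrieve-then-augment shape of build_prompt stays the same.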

Posted 4 days ago

Apply

2.0 years

5 - 8 Lacs

Noida

On-site

GlassDoor logo

Gen AI Engineer Job Description Brightly Software is seeking a high performer to join our Product team in the role of Gen AI engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following: Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understand and develop state-management workflows using LangGraph.
Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design Qualifications Bachelor’s degree is required 2–4 years of total experience with a strong focus on AI and ML and 1+ years in core GenAI Engineering Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models. Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch. Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle. Experience working with agentic AI. Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS. Experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain. Practical experience in working with vector databases and embedding methodologies for efficient information retrieval. Possess experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI. Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies. Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures. Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes.
Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training. Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders. Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly.
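As context for the prompt-chaining skill listed above, here is a minimal runnable sketch of a two-step chain. The fake_llm function is a hypothetical stand-in for a real model call (for example, a hosted inference endpoint); its canned responses exist only so the control flow can run:

```python
def fake_llm(prompt):
    """Hypothetical stand-in for a real LLM call (e.g., an inference endpoint).
    Returns canned responses keyed on the task marker in the prompt."""
    if prompt.startswith("SUMMARIZE:"):
        return "pump pressure fault"
    if prompt.startswith("CLASSIFY:"):
        return "maintenance"
    return "unknown"

def chain(ticket_text):
    """Two-step prompt chain: summarize the ticket, then classify the summary.
    The output of step 1 is injected into the prompt of step 2."""
    summary = fake_llm(f"SUMMARIZE: {ticket_text}")
    category = fake_llm(f"CLASSIFY: {summary}")
    return {"summary": summary, "category": category}

result = chain("Unit 7 reports fluctuating pressure readings on the intake pump.")
```

Evaluating such a chain means checking each intermediate output as well as the final one, which is why the steps are kept as separate, testable calls.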

Posted 4 days ago

Apply

0.0 years

3 - 7 Lacs

Indore

On-site

GlassDoor logo

Role Overview: As a Machine Learning Engineer, you will be responsible for designing, developing, and deploying machine learning models that drive [specific application, e.g., predictive analytics, natural language processing, computer vision, recommendation systems, etc.]. You will collaborate closely with data scientists, software engineers, and product teams to build scalable, high-performance AI solutions. You should have a strong foundation in machine learning algorithms, data processing, and software development, along with the ability to take ownership of the full machine learning lifecycle— from data collection and model training to deployment and monitoring. Key Responsibilities: Model Development: Design and implement machine learning models for various applications, such as [insert specific use cases, e.g., predictive analytics, classification, clustering, anomaly detection, etc.]. Data Preparation & Processing: Work with large datasets, including preprocessing, feature engineering, and data augmentation to ensure high-quality input for model training. Model Training & Tuning: Train, optimize, and fine-tune models using modern machine learning frameworks and algorithms. Monitor performance metrics and adjust parameters for model improvement. Model Deployment: Deploy machine learning models into production environments using tools like Docker, Kubernetes, or cloud platforms such as AWS, Azure, or Google Cloud. Performance Monitoring & Optimization: Continuously monitor deployed models for performance, accuracy, and scalability. Implement model retraining pipelines and maintain model health. Collaboration: Work cross-functionally with data scientists, software engineers, product managers, and other stakeholders to integrate machine learning solutions into business workflows. Research & Innovation: Stay up-to-date with the latest trends and advancements in machine learning and AI to drive innovation within the company. 
Qualifications: Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field. A PhD is a plus but not required. Experience: 0–1 years of experience in machine learning, data science, or a related technical role. Proven experience building and deploying machine learning models in a production environment. Experience with cloud platforms (e.g., AWS, GCP, Azure) for model deployment and scalability. Technical Skills: Proficiency in Python (preferred) or other programming languages (e.g., R, Java, C++). Strong knowledge of machine learning frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, Keras, or similar. Familiarity with deep learning techniques (e.g., CNNs, RNNs, transformers) is highly desirable. Experience with data processing tools such as Pandas, NumPy, and SQL. Knowledge of version control (e.g., Git), containerization (e.g., Docker), and CI/CD pipelines. Experience with big data tools and frameworks (e.g., Hadoop, Spark) is a plus. Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently and manage multiple priorities in a fast-paced environment. Detail-oriented and organized, with a passion for learning and innovation. Job Type: Full-time Pay: ₹5,000.00 - ₹30,000.00 per month Schedule: Day shift Work Location: In person
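To illustrate the train/evaluate lifecycle such a role covers, the sketch below fits a deliberately simple nearest-centroid classifier on synthetic data and measures hold-out accuracy. It is a toy under stated assumptions, not a production recipe; real work would use frameworks like scikit-learn, TensorFlow, or PyTorch as noted above:

```python
import random

random.seed(0)

# Synthetic 1-D dataset: class 0 clusters near 0.0, class 1 near 5.0.
data = [(random.gauss(0.0, 0.5), 0) for _ in range(50)]
data += [(random.gauss(5.0, 0.5), 1) for _ in range(50)]
random.shuffle(data)
train, test = data[:80], data[80:]

def fit(samples):
    """'Training' here is just computing one centroid per class."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

centroids = fit(train)
accuracy = sum(predict(centroids, x) == label for x, label in test) / len(test)
```

The same split-fit-score shape carries over directly when the centroid step is swapped for a real estimator.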

Posted 4 days ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

Artificial Intelligence & Machine Learning Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance) About INLIGHN TECH INLIGHN TECH is a forward-thinking edtech startup offering project-driven virtual internships that prepare students for today’s competitive tech landscape. Our AI & ML Internship is designed to immerse you in real-world applications of machine learning and artificial intelligence, helping you develop job-ready skills through hands-on projects. 🚀 Internship Overview As an AI & ML Intern, you will explore machine learning algorithms, build predictive models, and work on projects that mimic real-world use cases—ranging from recommendation systems to AI-based automation tools. You’ll gain experience with Python, Scikit-learn, TensorFlow, and more. 🔧 Key Responsibilities Work on datasets to clean, preprocess, and prepare for model training Implement machine learning algorithms (regression, classification, clustering, etc.)
Build and test models using Scikit-learn, TensorFlow, Keras, or PyTorch Analyze model performance and optimize using evaluation metrics Collaborate with mentors to develop AI solutions for business or academic use cases Present findings and document all steps of the model-building process ✅ Qualifications Pursuing or recently completed a degree in Computer Science, Data Science, AI/ML, or related fields Proficient in Python and familiar with data science libraries (NumPy, Pandas, Matplotlib) Basic understanding of machine learning concepts and algorithms Experience with tools like Jupyter Notebook, Google Colab, or similar platforms Strong analytical mindset and interest in solving real-world problems using AI Enthusiastic about learning and exploring new technologies 🎓 What You’ll Gain Hands-on experience with real-world AI and ML projects Exposure to end-to-end model development workflows A strong project portfolio to showcase your skills Internship Certificate upon successful completion Letter of Recommendation for high-performing interns Opportunity for a Full-Time Offer based on performance
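For interns new to the algorithms listed above, ordinary least-squares regression is a good first exercise. This sketch fits y ≈ a·x + b from scratch; the data points are invented for illustration:

```python
# Ordinary least squares for y = a*x + b, computed from scratch.
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]  # roughly follows y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y divided by variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x  # the fitted line passes through the means
```

Comparing the recovered a and b against library output (for example, NumPy's polyfit) is a simple way to validate the hand-rolled version.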

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Linkedin logo

Company Description At Velvet Bond, we guide individuals and couples towards emotional clarity, profound connection, and holistic healing. Our mission is to empower people to cultivate deep, fulfilling relationships through compassionate, evidence-based guidance and personalized support. We offer comprehensive emotional support to overcome anxiety and stress, relationship and intimacy coaching, trauma therapy, and spiritual wellness practices. At Velvet Bond, you're not broken—you're evolving. Location: Remote, New Delhi, Bangalore, Anywhere in India Type: Full-Time We are seeking a Creative Business Specialist with 3 to 5 years of experience to join our fast-growing coaching and consulting firm. The ideal candidate will have hands-on experience scaling coaching or consulting businesses, especially within holistic, spiritual, life coaching, or transformational education industries—both domestic and international. Experience with industry leaders such as Success Gyan, Mindvalley, Hay House, or similar organizations is highly valued. The role also involves collaborating with top-tier coaches, managing marketing funnels, and building long-term partnerships that support global growth and expansion. As a Creative Business Specialist, you will: Develop and implement strategic business plans focused on growth, scalability, and client impact. Analyze market trends and consumer behavior to inform marketing and product development. Lead initiatives to streamline business operations and enhance customer experience. Collaborate with coaches and internal teams to optimize marketing funnels and client journeys. Establish and nurture strategic partnerships and collaborations, both in India and overseas. Maintain strong client relationships through effective communication and service excellence. Qualifications: 3–5 years of relevant experience in business development, preferably in coaching, consulting, or wellness industries.
Proven track record in scaling up brands and working with holistic/spiritual or personal growth platforms. Solid understanding of marketing automation, funnels, and lead generation tools (e.g., ClickFunnels, Kajabi, HubSpot, landing pages, group clustering, community management, and communications tools). Strong analytical thinking and strategic planning abilities. Excellent interpersonal, communication, and client management skills. Demonstrated experience with international collaboration or overseas partnerships is a plus. Bachelor’s degree in Business, Marketing, Psychology, or a related field. MBA or specialized training in coaching/consulting strategy is an advantage. Passion for personal growth, emotional wellness, or conscious entrepreneurship is highly desirable. Shares and a company partnership are offered.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Bengaluru, Karnataka, India. Minimum qualifications: Bachelor's degree in Computer Science or equivalent practical experience. 3 years of customer-facing experience focused on translating customer needs into cloud solutions. Experience with SAP technologies (e.g., SAP HANA, SAP Netweaver, Solution Manager, SAP Business Suite, Business Warehouse, SAP Hybris, etc.), their architecture, and infrastructure. Experience with SAP Operating System (OS)/Database (DB) migrations, downtime optimizations, and backup strategies. Experience troubleshooting clients' technical issues and working with engineering teams, sales, services, and customers. Preferred qualifications: MBA or Master's degree in Computer Science or Engineering field. Certification in Google Cloud. 5 years of experience in technical client service. Experience implementing projects for live SAP environments on public cloud, and architecting around core production concepts (e.g., sizing, high availability, disaster recovery, multi-tenancy, scale-out and scale-up architectures, clustering, etc.). Excellent communication, writing, presentation and problem-solving skills. About The Job As a Technical Solutions Consultant, you will be responsible for the technical relationship of our largest advertising clients and/or product partners. You will lead cross-functional teams in Engineering, Sales and Product Management to leverage emerging technologies for our external clients/partners. From concept design and testing to data analysis and support, you will oversee the technical execution and business operations of Google's online advertising platforms and/or product partnerships. You will be able to balance business and partner needs with technical constraints, develop innovative, cutting-edge solutions and act as a partner and consultant to those you are working with.
You will also be able to build tools and automate products, oversee the technical execution and business operations of Google's partnerships, as well as develop product strategy and prioritize projects and resources. As a SAP Cloud Consultant, you will work directly with Google’s most strategic customers on critical projects to provide management, consulting and technical advisory to customer engagements while working with client executives and key technical leaders to deploy solutions on Google’s Cloud Platform. You will work closely with Google's partners servicing customer accounts to manage projects, deliver consulting services, and provide technical guidance and best practice expertise. To be successful, you know how to navigate ambiguity, use your extensive experience working with cloud technologies, have technical depth and enjoy working with client executives. You also possess excellent communication and project management skills. Travel approximately 25% of the time for client engagements. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Work with customer technical leads, client executives, and partners to manage and deliver successful implementations of cloud solutions becoming a trusted advisor to decision makers throughout the engagement. Work with internal specialists, Product and Engineering teams to package best practices and lessons learned into thought leadership, methodologies and published assets. 
Interact with Sales teams, partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks/issues, and timelines for successful client outcomes. Advocate for customer needs in order to overcome adoption blockers and drive new feature development based on the field experience. Propose architectures for SAP products and manage the deployment of cloud based SAP solutions according to complex customer requirements and implementation best practices. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 4 days ago

Apply

10.0 - 15.0 years

0 Lacs

India

On-site

Linkedin logo

About Us: At Articul8 AI, we relentlessly pursue excellence and create exceptional GenAI products that exceed customer expectations. We are a team of dedicated individuals who take pride in our work and strive for greatness in every aspect of our business. We believe in using our advantages to make a positive impact on the world and inspiring others to do the same. Job Description: Articul8 AI is seeking a Data Scientist to design, develop, and deploy AI-driven solutions that solve real-world problems at scale. You will work on machine learning models, large language models (LLMs), and AI applications while optimizing performance for production environments. This role requires expertise in AI/ML frameworks, cloud platforms, and software engineering best practices. You will be developing and deploying advanced deep learning and generative AI models and algorithms to enhance existing products or to create new products that fulfill critical business needs. In this role, you will be working closely with Product Management and Engineering teams to build GenAI products at scale. You will be responsible for transforming business needs to technical requirements and for leveraging state-of-the-art research to develop and deliver products. You will also support Engineering with testing and validation of the product. Responsibilities: Design, develop, and deploy AI-driven solutions in production that solve real-world problems at scale. Train, fine-tune, and optimize deep learning and LLM-based solutions to enhance existing products or to create new products. Evaluate and implement state-of-the-art AI/ML algorithms to improve model accuracy and efficiency to enhance and deliver product. Optimize models ensuring low latency and high availability for cloud and on-prem environments. Collaborate closely with engineering teams and product management to build GenAI products at scale. Work with large-scale datasets, ensuring data quality, preprocessing, and feature engineering.
Develop APIs and microservices to serve AI models in production at scale. Handle large-scale datasets, preprocessing, and feature engineering to ensure data quality. Responsible for transforming business needs to technical requirements to develop and deliver products. Stay up to date with the latest AI trends, research, and best practices. Ensure ethical AI practices, data privacy, and security compliance. Required Qualifications Master’s Degree in Science, Technology, Engineering and Mathematics (STEM) or Statistics with 10 to 15 years of experience In-depth knowledge and experience with algorithms for time series analysis including data pre-processing, pattern recognition, clustering, modeling and anomaly detection. Strong expertise in Deep Learning, Machine Learning and Generative AI models (including Language, Vision, Audio and Multi-modal models) Exposure to one or more of the following domains - Optimization, Stochastic Processes, Estimation theory Experience in deploying deep learning models on multiple GPUs Experience in developing models and algorithms using ML frameworks like PyTorch, TensorFlow Strong programming skills in one or more of the following languages - Python, Golang Experience in building Docker images to create scalable, efficient, and portable applications Experience in Kubernetes for container orchestration and writing YAML manifests to define how applications and services should be deployed Knowledge of cloud platforms, at least one of AWS, Azure, GCP, and their services for deployment of applications Strong verbal and written communication skills. Preferred Qualifications Ph.D. in Science, Technology, Engineering and Mathematics (STEM) or Statistics with 6 to 8 years of experience. Deep expertise and experience in training/fine-tuning large language models on large GPU clusters.
Experience in parallel programming including data, model and tensor parallelisms with PyTorch and TensorFlow Deep experience in building and scaling machine learning / deep learning or GenAI applications with Docker and Kubernetes. Strong working experience with at least two cloud service providers (AWS, Azure, GCP). Knowledge of CI/CD pipelines and MLOps tools like MLflow, Kubeflow, or TensorBoard. Deep expertise and experience in one or more domains such as finance, healthcare, or engineering. Ability to transform business needs to technical requirements, define tasks, metrics and milestones. Ability to communicate technological challenges and achievements to various stakeholders. Professional Attributes: Problem Solving: ability to break down complex problems into manageable components, devising creative solutions, and iteratively refining ideas based on feedback and experimental evidence. Collaboration and Communication: proficiency in working cross-functionally—communicating clearly, providing constructive criticism, delegating responsibilities, and respecting diverse perspectives. Project Management and Prioritization: demonstrated aptitude in balancing multiple projects and deadlines, and allocating time efficiently between short-term objectives and long-term goals. Critical Thinking: ability to carefully evaluate assumptions, questioning established methodologies, challenging own biases, and maintaining skepticism when interpreting results. Curiosity and Continuous Learning: ability to stay curious about advances in related fields and constantly seeking opportunities to expand knowledge base. Emotional Intelligence and Intellectual Humility: capable of displaying empathy, resilience, adaptability, and self-awareness. Ability to recognize own limitations, embracing uncertainty, acknowledging mistakes, and valuing others' contributions.
What We Offer: By joining our team, you become part of a community that embraces diversity, inclusiveness, and lifelong learning. We nurture curiosity and creativity, encouraging exploration beyond conventional wisdom. Through mentorship, knowledge exchange, and constructive feedback, we cultivate an environment that supports both personal and professional development. If you're ready to join a team that's changing the game, apply now to become a part of the Articul8 team. Join us on this adventure and help shape the future of Generative AI in the enterprise.

Posted 4 days ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

About company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion and a global workforce of 234,054, is listed on NASDAQ, operates in over 60 countries, and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida. Job Title: IBM WebSphere and IBM MQ Experience: 6–12 years Location: Kochi Notice Period: 0-15 days/serving Salary: As per market Mode of Hire: Contract Job Summary Roles and Key Responsibilities: IBM WebSphere MQ Responsibilities Install, configure, and administer IBM WebSphere MQ (v9.x+) on Linux, Windows, and z/OS. Design and implement MQ clustering, high availability (HA), and disaster recovery (DR) strategies. Configure MQ objects (Queue Managers, Queues, Channels, Topics) for optimal performance. Implement MQ security (SSL/TLS, OAuth, LDAP, RBAC) and audit logging. Troubleshoot MQ performance bottlenecks, dead-letter queues, and connectivity issues. Automate MQ deployments using MQSC scripts, Ansible, or Python. Integrate IBM MQ with ACE/IIB, DataPower, Kafka, and other middleware. Monitor MQ environments using IBM MQ Explorer, Prometheus, Grafana, or Splunk. WebSphere Application Server (WAS) Responsibilities Install, configure, and maintain IBM WebSphere Application Server (WAS 9.x/8.5). Deploy and manage Java/J2EE applications on WAS. Configure WebSphere clustering, dynamic scaling, and workload management. Optimize JVM settings, connection pools, and thread pools for high performance.
Implement WAS security (LDAP, SSL, OAuth, SAML) and role-based access control. Troubleshoot application server crashes, memory leaks, and performance issues. Automate WAS deployments using wsadmin, Jython, or DevOps tools (Jenkins, Docker, Kubernetes). Integrate WAS with IBM HTTP Server (IHS), DB2, Oracle, and other enterprise systems.
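As an illustration of the MQSC automation mentioned above, deployment scripts are often generated programmatically and then applied with runmqsc. The sketch below is a hypothetical generator; the object names, depth, and cipher setting are assumptions, not a definitive template:

```python
def mqsc_local_queue(name, max_depth=5000):
    """Emit an MQSC DEFINE for a local queue (name and depth are illustrative)."""
    return f"DEFINE QLOCAL('{name}') MAXDEPTH({max_depth}) REPLACE"

def mqsc_svrconn_channel(name, cipher="ANY_TLS12_OR_HIGHER"):
    """Emit an MQSC DEFINE for a TLS-enabled server-connection channel."""
    return (
        f"DEFINE CHANNEL('{name}') CHLTYPE(SVRCONN) "
        f"TRPTYPE(TCP) SSLCIPH('{cipher}') REPLACE"
    )

# Assemble a script that could be applied with: runmqsc QM1 < objects.mqsc
script = "\n".join([
    mqsc_local_queue("APP.REQUEST.QUEUE", max_depth=50000),
    mqsc_svrconn_channel("APP.SVRCONN"),
])
```

Generating MQSC this way keeps object definitions in version control and makes them easy to template from Ansible or CI/CD pipelines.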

Posted 4 days ago

Apply