Jobs
Interviews

1027 Normalization Jobs - Page 16

Set up a job alert
JobPe aggregates listings for easy browsing; you apply directly on the original job portal.

4.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Freshworks: Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives, create better experiences for their customers and employees, and help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option: Freshworks, with a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description: Write scripts for automating DevOps tasks such as configuration management, provisioning, and deployments using Python, Ruby, or Go. Integrate scripts with DevOps tools and pipelines. Manage user accounts, permissions, and file systems. Perform advanced Linux administration and shell scripting tasks. Automate system administration tasks using shell scripts. Design and implement CI/CD pipelines for automating deployments and testing. Utilize popular CI/CD tools such as Jenkins and GitLab CI/CD. Integrate CI/CD pipelines with version control systems and container orchestration platforms. Set up and manage monitoring and logging solutions. Use tools for collecting, analyzing, and visualizing application and infrastructure logs. Troubleshoot issues based on monitoring and logging data.
Utilize Git for version control and collaboration. Perform branching, merging, and conflict resolution using Git. Set up and manage Git repositories. Work effectively with developers, operations teams, and other stakeholders. Document DevOps processes and procedures. Troubleshoot complex DevOps issues. Identify root causes of problems and implement solutions.

Qualifications: Experience: 4-7 years. Advanced understanding of programming concepts (data structures, algorithms, object-oriented programming). In-depth knowledge of Linux administration and shell scripting. Proficiency in using common Linux commands and tools for system administration. Extensive experience with Git, including proficiency in Git commands for branching, merging, and conflict resolution. Expert knowledge of CI/CD principles and best practices. Proficiency in CI/CD tools (e.g., Jenkins, GitLab CI/CD, Azure DevOps Pipelines). Experience in setting up and managing monitoring and logging solutions. Advanced communication and collaboration skills. Advanced problem-solving and analytical skills. Knowledge of RDBMS such as MySQL and PostgreSQL. Expertise in database design, normalization, and optimization. Strong understanding of SQL and proficiency in writing complex queries. Experience in database administration tasks such as backup, recovery, security, and performance tuning. Extensive experience with Kubernetes environments: deep understanding of Kubernetes architecture and components, proficiency in using kubectl commands and managing Kubernetes resources, and experience in setting up and managing Kubernetes clusters. Experience with major cloud platforms (e.g., AWS, Azure, GCP). Familiarity with cloud-specific DevOps tools and services for deployments, monitoring, and scaling.

Additional Information: At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion, and ethnicity.
We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
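The scripting duties this posting leads with (automating provisioning and deployment steps in Python) could be sketched roughly as follows. The environment names, service name, and settings below are invented for illustration; they are not part of the posting.

```python
# Minimal sketch of a DevOps automation step: rendering a per-environment
# config file as part of a deploy pipeline. All names/values are hypothetical.
import json

ENVIRONMENTS = {
    "staging": {"replicas": 2, "log_level": "debug"},
    "production": {"replicas": 6, "log_level": "warning"},
}

def render_config(env: str, service: str) -> str:
    """Return the JSON config for one service in one environment."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    cfg = {"service": service, **ENVIRONMENTS[env]}
    return json.dumps(cfg, indent=2, sort_keys=True)

config = render_config("staging", "billing-api")
```

In a real pipeline, a step like this would typically write the rendered file to disk and hand it to a deployment tool rather than keep it in memory.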

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Must Haves: Strong knowledge of SQL development and relational databases (2 years with MS SQL Server; additional experience with databases such as PostgreSQL or Oracle a plus). Experience with database design, indexing, normalization, and query optimization techniques (including performance profiling). Priority focus on Microsoft products (MS SQL Server, Azure DevOps, Azure Data Factory, Visual Studio).

Job Description: Proven experience as an SQL Developer or in a similar role. Strong knowledge of SQL and relational databases (2 years working with MS SQL Server). Experience with database design, indexing, normalization, and query optimization techniques, including the differences between join types and the performance implications of SQL commands and the syntax used to invoke them. Familiarity with tools such as Synapse is desirable. Familiarity with data modeling tools and practices. Capable of understanding existing materialized views, writing new ones, and optimizing their performance and execution time. Ability to work with large datasets and handle the associated performance challenges. Understanding of database security and best practices. Experience with ETL processes and tools (optional, but a plus). Strong problem-solving skills and attention to detail. Excellent communication skills and ability to collaborate with cross-functional teams.
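As a rough illustration of the join-type distinction this posting asks about: an INNER JOIN drops rows with no match, while a LEFT JOIN keeps them with NULLs. The sketch below uses Python's stdlib SQLite purely for portability (the role itself centers on MS SQL Server); the tables and data are invented.

```python
# INNER JOIN vs LEFT JOIN on a toy schema (stdlib sqlite3; names invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 99.0);   -- Ravi has no orders
""")

inner_rows = conn.execute(
    "SELECT c.name, o.total FROM customers c "
    "JOIN orders o ON o.customer_id = c.id").fetchall()
left_rows = conn.execute(
    "SELECT c.name, o.total FROM customers c "
    "LEFT JOIN orders o ON o.customer_id = c.id").fetchall()
# inner_rows contains only Asha; left_rows also keeps ('Ravi', None)
```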

Posted 3 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderābād

Remote

Job Information: Industry: IT Services. Date Opened: 06/20/2025. Salary: Confidential. Job Type: Contract. Work Experience: 4-7 Years. City: Hyderabad (open to remote). State/Province: Telangana. Country: India. Zip/Postal Code: 500081.

Job Description: Veltris is a Digital Product Engineering Services partner committed to driving technology-enabled transformation across enterprises, businesses, and industries. We specialize in delivering next-generation solutions for sectors including healthcare, technology, communications, manufacturing, and finance. With a focus on innovation and acceleration, Veltris empowers clients to build, modernize, and scale intelligent products that deliver connected, AI-powered experiences. Our experience-centric approach, agile methodologies, and exceptional talent enable us to streamline product development, maximize platform ROI, and drive meaningful business outcomes across both digital and physical ecosystems. In a strategic move to strengthen our healthcare offerings and expand industry capabilities, Veltris has acquired BPK Technologies. This acquisition enhances our domain expertise, broadens our go-to-market strategy, and positions us to deliver even greater value to enterprise and mid-market clients in healthcare and beyond.

Roles & Responsibilities: Gather requirements and translate them into user stories that can be engineered and developed. Create requirements in the Azure DevOps board. Document and communicate translated requirements to team members. Attend daily stand-ups and other meetings as needed. Work in the IST time zone, with a few hours of overlap with US/Canada time zones required. Participate in product architecture, design, and requirement discussions. Work with your product manager or senior Business Analyst.

Must Have skills: Good understanding of relational databases.
Ability to understand client requirements and research how to break them down into items that can be engineered and developed. Hands-on experience in writing SQL queries, joins, filtering, data normalization, etc. Good analytical skills and the ability to analyse data in Excel sheets. Ability to multitask. Excellent verbal and written communication in English.

Good to Have skills: Working knowledge of Agile methodology. Understanding of Azure DevOps. Able to understand and create ER diagrams and DB schemas. ETL, DWH, and BI knowledge will be an added advantage. Dentistry and healthcare domain experience is preferred.

Experience: 4-8 yrs. Qualification: Bachelor's Degree in Computer Science, Management Information Sciences, Mathematics, Engineering, Business, or area of functional responsibility preferred, or a combination of equivalent education and experience.

Disclaimer: The information provided herein is for general informational purposes only and reflects the current strategic direction and service offerings of Veltris. While we strive for accuracy, Veltris makes no representations or warranties regarding the completeness, reliability, or suitability of the information for any specific purpose. Any statements related to business growth, acquisitions, or future plans, including the acquisition of BPK Technologies, are subject to change without notice and do not constitute a binding commitment. Veltris reserves the right to modify its strategies, services, or business relationships at its sole discretion. For the most up-to-date and detailed information, please contact Veltris directly.
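The data-normalization skill this posting mentions could be illustrated with a small, hypothetical sketch: moving a repeated attribute out of one wide table into its own table keyed by id. The schema and data below are invented; stdlib SQLite stands in for whatever RDBMS is actually in use.

```python
# Basic normalization sketch: a department name repeated on every employee
# row becomes a departments table referenced by foreign key. Names invented.
import sqlite3

conn = sqlite3.connect(":memory:")
# Denormalized input rows: (employee, department)
flat = [("Asha", "Sales"), ("Ravi", "Sales"), ("Meera", "Support")]

conn.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT,
                            department_id INTEGER REFERENCES departments(id));
""")
for emp_name, dept_name in flat:
    conn.execute("INSERT OR IGNORE INTO departments(name) VALUES (?)", (dept_name,))
    dept_id = conn.execute("SELECT id FROM departments WHERE name = ?",
                           (dept_name,)).fetchone()[0]
    conn.execute("INSERT INTO employees(name, department_id) VALUES (?, ?)",
                 (emp_name, dept_id))

# 'Sales' is now stored once, however many employees reference it.
dept_count = conn.execute("SELECT COUNT(*) FROM departments").fetchone()[0]
```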

Posted 3 weeks ago

Apply

0 years

3 - 4 Lacs

Mumbai

On-site

Company Description: Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news, and reviews on all things personal finance, health, business, and everyday life decisions. We believe in the power of entrepreneurial capitalism and use it on various platforms to ignite the conversations that drive systemic change in business, culture, and society. We celebrate success and are committed to using our megaphone to drive diversity, equity, and inclusion. We are the world’s biggest business media brand, and we consistently place in the top 20 of the most popular sites in the United States, in good company with brands like Netflix, Apple, and Google. In short, we have a big platform and we use it responsibly.

Job Description: The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Data Research Engineer - Team Lead will involve guiding team members through code standards, optimization techniques, and best practices in debugging and testing. They oversee the development and consistent application of testing protocols, including unit, integration, and performance testing, ensuring a high standard of code quality across the team. They work closely with engineers, offering technical mentorship in areas like Git version control, task tracking, and documentation processes, as well as advanced Python and database practices.
Responsibilities:
Technical Mentorship and Code Quality: Guide and mentor team members on coding standards, optimization techniques, and debugging. Conduct thorough code reviews, provide constructive feedback, and enforce code quality standards to ensure maintainable and efficient code.
Testing and Quality Assurance Leadership: Develop, implement, and oversee rigorous testing protocols, including unit, integration, and performance testing, to guarantee the reliability and robustness of all projects. Advocate for automated testing and ensure comprehensive test coverage within the team.
Process Improvement and Documentation: Establish and maintain high standards for version control, documentation, and task tracking across the team. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Serve as the team’s primary resource for troubleshooting complex issues, particularly in Python, MySQL, GitKraken, and Knime. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including architecture design, data engineering best practices, and advanced Python programming. Guide the team in building scalable and reliable data solutions.
Cross-Functional Collaboration: Work closely with data scientists, product managers, and quality assurance teams to align on data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, cleaning, normalization, transformation, aggregation, and visualization to support production-level workloads.
Big Data: Leverage distributed processing frameworks such as PySpark and Kafka to manage and process massive datasets efficiently.
Cloud-Native Data Solutions: Develop and optimize workflows for cloud-native data solutions, including BigQuery, Databricks, Snowflake, Redshift, and tools like Airflow and AWS Glue.
Regulations: Ensure compliance with regulatory frameworks like GDPR and implement robust data governance and security measures.

Skills and Experience: Experience: 8+ years.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to design modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Git and GitKraken for version control, with experience in task management, ideally on GitHub. Familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Ability to embed automated testing within development workflows.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion.
Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Effective communicator capable of aligning cross-functional teams on project requirements, technical standards, and data workflows.
Adaptability and Continuous Learning: A commitment to staying updated with the latest in data engineering, coding practices, and tools, with a proactive approach to learning and sharing knowledge within the team.
Data Pipelines: Comprehensive expertise in building and optimizing data pipelines, including data transfer, transformation, and visualization, for real-world applications.
Distributed Systems: Strong knowledge of distributed systems and big data tools such as PySpark and Kafka.
Data Warehousing: Proficiency with modern cloud data warehousing platforms (BigQuery, Databricks, Snowflake, Redshift) and orchestration tools (Airflow, AWS Glue).
Regulations: Demonstrated understanding of regulatory compliance requirements (e.g., GDPR) and best practices for data governance and security in enterprise settings.

Perks: Day off on the 3rd Friday of every month (one long weekend each month). Monthly Wellness Reimbursement Program to promote health and well-being. Monthly Office Commutation Reimbursement Program. Paid paternity and maternity leaves.

Qualifications: Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.

Additional Information: All your information will be kept confidential according to EEO guidelines.
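One pipeline stage this listing describes (cleaning and normalizing raw records before aggregation) might look roughly like the minimal sketch below. The field names, rules, and sample records are invented for illustration.

```python
# Hypothetical cleaning/normalization stage of a data pipeline:
# trim and lowercase a key field, coerce types, drop blanks and duplicates.
def clean(records):
    seen, out = set(), []
    for r in records:
        email = r.get("email", "").strip().lower()
        if not email or email in seen:
            continue  # drop blank keys and duplicates after normalization
        seen.add(email)
        out.append({"email": email, "score": float(r.get("score", 0))})
    return out

cleaned = clean([
    {"email": " A@X.COM ", "score": "3"},
    {"email": "a@x.com", "score": "5"},  # duplicate once normalized
    {"email": "", "score": "9"},         # blank key, dropped
])
```

A production version of a stage like this would typically also log or quarantine the dropped rows rather than silently discard them.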

Posted 3 weeks ago

Apply

0 years

2 - 3 Lacs

Mumbai

On-site

eClerx is hiring a Product Data Management Analyst who will work within our Product Data Management team to help our customers enhance online product data quality for Electrical, Mechanical & Electronics products. The role also involves creating technical specifications and product descriptions for online presentation, as well as consultancy projects on redesigning e-commerce customers’ website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and concerns, if any. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area is a plus.

Apprentice_Analyst roles and responsibilities: Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources such as the internet, specific websites, databases, etc. Data quality checks and correction. Data profiling and reporting (basic). Email communication with the client on request acknowledgment, project status, and responses to queries. Help customers enhance their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective. Provide technical consulting to the customer category managers around industry best practices of product data enhancement.

Technical and Functional Skills: Bachelor’s Degree in Engineering from the Electrical, Mechanical, or Electronics stream. Excellent technical knowledge of engineering products (pumps, motors, HVAC, plumbing, etc.) and technical specifications. Intermediate knowledge of MS Office/Internet.
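The standardization/normalization work described in this listing could be sketched as a tiny, hypothetical normalizer for free-text product attributes; the unit mappings and formats below are invented examples, not eClerx's actual rules.

```python
# Hypothetical product-spec standardization: '2hp', ' 2 HP ', '2 Hp' -> '2 HP'.
import re

UNIT_MAP = {"hp": "HP", "kw": "kW", "v": "V"}  # invented canonical forms

def normalize_spec(raw: str) -> str:
    """Normalize a 'value unit' attribute; unknown formats pass through."""
    m = re.fullmatch(r"\s*([\d.]+)\s*([A-Za-z]+)\s*", raw)
    if not m:
        return raw.strip()
    value, unit = m.group(1), m.group(2).lower()
    return f"{value} {UNIT_MAP.get(unit, m.group(2))}"

result = normalize_spec(" 2hp ")
```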

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Join us as a Senior Database Developer and drive high-performance data systems for financial services! MSBC is seeking a Senior Database Developer with expertise in Oracle and PL/SQL to design, develop, and optimize complex database solutions. This role offers an exciting opportunity to enhance data integrity, scalability, and performance while working on mission-critical applications. Collaborate with industry experts to deliver efficient and secure database solutions supporting financial services and enterprise applications. If you are passionate about database development and thrive in a fast-paced, technology-driven environment, join us in driving innovation and efficiency through data management.

Key Tools and Technologies:
Database Management: Oracle Database, MySQL, PostgreSQL, NoSQL
Development & Optimization: PL/SQL, SQL, Query Optimization, Index Tuning, Execution Plans
Architecture & Data Modeling: Logical & Physical Data Modeling, Normalization, Data Governance
Security & Performance: Data Security, Performance Tuning, Backup & Recovery, Disaster Recovery
Version Control & Deployment: Git, Database Deployment Strategies
Cloud & Automation: Oracle Cloud, AWS RDS, ETL Processes, BI Tools, DevOps Practices

Key Responsibilities: Develop and optimize database solutions ensuring integrity, security, and performance. Design and maintain database schemas, tables, indexes, views, and stored procedures. Implement data models and governance standards aligned with business requirements. Conduct performance tuning and troubleshooting to enhance efficiency. Manage backups, recovery, and disaster recovery strategies. Collaborate with architects, analysts, and development teams for seamless integration. Provide technical support and mentorship to junior developers.

Skills & Qualifications: 5+ years of experience in database development with expertise in PL/SQL and SQL. Strong grasp of database architecture, normalization, and design patterns.
Hands-on experience with database security, performance tuning, and version control. Familiarity with cloud-based solutions, automation, and DevOps practices. Additional experience with MySQL, PostgreSQL, or NoSQL databases is a plus. Oracle Certified Professional (OCP) certification preferred. Strong problem-solving skills, attention to detail, and communication skills.

Note: Shift timings align with UK working hours. This role is based in Ahmedabad, but candidates from other cities or states are encouraged to apply, as remote or hybrid working options are available.

MSBC Group has been a trusted technology partner for over 20 years, delivering the latest systems and software solutions for financial services, manufacturing, logistics, construction, and startup ecosystems. Our expertise includes Accessible AI, Custom Software Solutions, Staff Augmentation, Managed Services, and Business Process Outsourcing. We are at the forefront of developing advanced AI-enabled services and supporting transformative projects, such as state-of-the-art trading platforms, seamless application migrations, and real-time data analytics integrations. With offices in London, California, and Ahmedabad, and operating in every time zone, MSBC Group is your AI and automation partner.
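A rough, portable illustration of the index tuning and execution-plan work this role emphasizes: the same query switches from a full table scan to an index search once a covering index exists. This sketch uses stdlib SQLite rather than Oracle (the role's actual platform), and the table and index names are invented.

```python
# Query-plan-before-and-after sketch (stdlib sqlite3; names invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades(symbol, qty) VALUES (?, ?)",
                 [("ACME", i) for i in range(1000)])

def plan(sql):
    """Join the detail column of EXPLAIN QUERY PLAN output into one string."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM trades WHERE symbol = 'ACME'")
conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")
after = plan("SELECT * FROM trades WHERE symbol = 'ACME'")
# before reports a table scan; after reports a search using the new index
```

Oracle's equivalent workflow would be `EXPLAIN PLAN FOR …` plus `DBMS_XPLAN.DISPLAY`, but the tuning loop (read plan, add index, re-read plan) is the same.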

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Company Description: Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news, and reviews on all things personal finance, health, business, and everyday life decisions. We believe in the power of entrepreneurial capitalism and use it on various platforms to ignite the conversations that drive systemic change in business, culture, and society. We celebrate success and are committed to using our megaphone to drive diversity, equity, and inclusion. We are the world’s biggest business media brand, and we consistently place in the top 20 of the most popular sites in the United States, in good company with brands like Netflix, Apple, and Google. In short, we have a big platform and we use it responsibly.

Job Description: The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Data Research Engineer - Team Lead will involve guiding team members through code standards, optimization techniques, and best practices in debugging and testing. They oversee the development and consistent application of testing protocols, including unit, integration, and performance testing, ensuring a high standard of code quality across the team. They work closely with engineers, offering technical mentorship in areas like Git version control, task tracking, and documentation processes, as well as advanced Python and database practices.
Responsibilities:
Technical Mentorship and Code Quality: Guide and mentor team members on coding standards, optimization techniques, and debugging. Conduct thorough code reviews, provide constructive feedback, and enforce code quality standards to ensure maintainable and efficient code.
Testing and Quality Assurance Leadership: Develop, implement, and oversee rigorous testing protocols, including unit, integration, and performance testing, to guarantee the reliability and robustness of all projects. Advocate for automated testing and ensure comprehensive test coverage within the team.
Process Improvement and Documentation: Establish and maintain high standards for version control, documentation, and task tracking across the team. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Serve as the team’s primary resource for troubleshooting complex issues, particularly in Python, MySQL, GitKraken, and Knime. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including architecture design, data engineering best practices, and advanced Python programming. Guide the team in building scalable and reliable data solutions.
Cross-Functional Collaboration: Work closely with data scientists, product managers, and quality assurance teams to align on data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team’s skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, cleaning, normalization, transformation, aggregation, and visualization to support production-level workloads.
Big Data: Leverage distributed processing frameworks such as PySpark and Kafka to manage and process massive datasets efficiently.
Cloud-Native Data Solutions: Develop and optimize workflows for cloud-native data solutions, including BigQuery, Databricks, Snowflake, Redshift, and tools like Airflow and AWS Glue.
Regulations: Ensure compliance with regulatory frameworks like GDPR and implement robust data governance and security measures.

Skills and Experience: Experience: 8+ years.
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to design modular, reusable, and efficient code structures. Experience in implementing OOP best practices to enhance code organization and maintainability.
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Git and GitKraken for version control, with experience in task management, ideally on GitHub. Familiarity with Knime or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Ability to embed automated testing within development workflows.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion.
Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Effective communicator capable of aligning cross-functional teams on project requirements, technical standards, and data workflows.
Adaptability and Continuous Learning: A commitment to staying updated with the latest in data engineering, coding practices, and tools, with a proactive approach to learning and sharing knowledge within the team.
Data Pipelines: Comprehensive expertise in building and optimizing data pipelines, including data transfer, transformation, and visualization, for real-world applications.
Distributed Systems: Strong knowledge of distributed systems and big data tools such as PySpark and Kafka.
Data Warehousing: Proficiency with modern cloud data warehousing platforms (BigQuery, Databricks, Snowflake, Redshift) and orchestration tools (Airflow, AWS Glue).
Regulations: Demonstrated understanding of regulatory compliance requirements (e.g., GDPR) and best practices for data governance and security in enterprise settings.

Perks: Day off on the 3rd Friday of every month (one long weekend each month). Monthly Wellness Reimbursement Program to promote health and well-being. Monthly Office Commutation Reimbursement Program. Paid paternity and maternity leaves.

Qualifications: Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.

Additional Information: All your information will be kept confidential according to EEO guidelines.

Posted 3 weeks ago

Apply

10.0 years

4 - 5 Lacs

Bengaluru

On-site

Job Title: Senior Database Administrator

Job Description: We are seeking a Senior Database Administrator with deep expertise in AWS cloud data services and strong experience supporting healthcare-grade systems. This role will be responsible for managing both on-premises and cloud-based database environments, ensuring high availability, data security, regulatory compliance, and optimal performance. The ideal candidate will be self-driven, technically skilled, and collaborative, with a proven track record in supporting mission-critical data environments in regulated industries.

Your role:
Database Management: Administer AWS RDS, Aurora, DynamoDB, and other AWS-managed database services for a 24x7 production environment. Maintain and support legacy on-premise SQL Server environments and coordinate migrations to newer versions and cloud platforms. Monitor and manage SQL Agent jobs, troubleshoot job failures, and maintain operational continuity. Perform regular maintenance tasks, including backups, patching, schema updates, and deployments. Automate administrative and monitoring tasks through scripts and infrastructure-as-code solutions.
Data Security, Availability & Compliance: Implement and maintain database security policies, access controls, encryption, and auditing to support healthcare data compliance (e.g., HIPAA). Design and support disaster recovery and high-availability solutions, ensuring alignment with business continuity plans and SLAs. Enforce robust change management and security standards across development, staging, and production environments. Ensure ongoing compliance with healthcare data regulations, including data retention and protection requirements.
Troubleshooting & Operational Support: Diagnose and resolve database performance and connectivity issues proactively. Provide incident support and root cause analysis for database-related service disruptions.
Collaborate with DevOps, application, and infrastructure teams to support and improve end-to-end performance. Database Design & Optimization: Participate in the design, normalization, and optimization of database schemas, indexes, and stored procedures. Implement and manage replication, clustering, and failover configurations for high availability and scalability. Conduct capacity planning and make strategic recommendations to ensure system performance under growing workloads. Support development teams with guidance on database best practices during architecture and review phases. You're the right fit if: Bachelor's degree in Computer Science, Information Systems, or a related discipline. 10+ years of hands-on database administration experience, including 3+ years with AWS database services. AWS certifications such as AWS Certified Database – Specialty, SysOps Administrator, or Cloud Practitioner are preferred. Expert knowledge of SQL Server 2008 R2 through 2019+, with experience migrating to the latest platforms. Proficient in AWS RDS, Aurora, DynamoDB, and the AWS shared responsibility model. Strong expertise in T-SQL, query tuning, stored procedure development, and optimization. Proven experience with SQL Server replication (transactional and merge), clustering, and availability groups. Familiarity with VMware and running SQL Server in virtualized environments. Hands-on experience with database performance monitoring tools (e.g., SQL Sentry, Datadog). Exposure to BI/data warehousing tools and techniques is a plus. Experience supporting data systems in regulated industries (healthcare, life sciences, etc.), with working knowledge of HIPAA compliance. Excellent communication, collaboration, and documentation skills. Highly motivated self-starter with a strong ability to multitask in a fast-paced environment. How we work together: We believe that we are better together than apart, which means working in person at least 3 days per week.
About Philips We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. Learn more about our business . Discover our rich and exciting history . Learn more about our purpose . If you’re interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here . #LI-EU #LI-Hybrid
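The schema design and normalization work this role calls for boils down to storing each fact exactly once. A minimal sketch, using SQLite in place of SQL Server and invented table names: customer attributes live in one table, and orders reference them by key rather than repeating them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Normalized design: each customer fact is stored once,
-- and orders reference it by key instead of duplicating it.
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT NOT NULL UNIQUE
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    amount      REAL NOT NULL
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Asha', 'asha@example.com')")
conn.executemany("INSERT INTO orders VALUES (?, 1, ?)", [(10, 99.5), (11, 45.0)])

# Updating the email touches one row, not every order -- the point of normalization.
conn.execute("UPDATE customers SET email = 'asha@new.example.com' WHERE customer_id = 1")
row = conn.execute("""
    SELECT c.email, COUNT(o.order_id)
    FROM customers c JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id
""").fetchone()
print(row)  # ('asha@new.example.com', 2)
```

In a denormalized design the same update would have to touch every order row, which is exactly the anomaly normalization removes.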

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Job Title - SQL Application Support Engineer Experience - 3 to 5 Years Location - Coimbatore Job Summary: We are seeking a dedicated and skilled Application Support Engineer with strong SQL expertise to join our dynamic team. The ideal candidate will provide technical support, resolve application issues, and ensure seamless system performance. This role requires proficiency in SQL, including the ability to write complex queries, stored procedures, and functions. Key Responsibilities: Provide Level 1 and Level 2 application support for internal and external clients. Investigate, troubleshoot, and resolve technical issues related to applications and databases. Develop and optimize SQL queries, stored procedures, and functions for data extraction, reporting, and troubleshooting. Collaborate with development teams to implement bug fixes, performance enhancements, and application updates. Monitor system performance, identify bottlenecks, and apply necessary optimizations. Ensure proper documentation of issues, resolutions, and updates for knowledge base maintenance. Provide training and guidance to end-users regarding application features and functionality. Required Skills and Qualifications: Bachelor's degree in Computer Science or Information Technology. Proven experience in SQL, including writing queries, stored procedures, functions, and performance tuning. Knowledge of database structures, normalization, and indexing strategies. Strong analytical and problem-solving skills with attention to detail. Excellent communication skills with the ability to interact with technical and non-technical stakeholders. Experience with incident management tools and support ticketing systems is a plus. Knowledge of the .NET framework or other programming languages is an advantage.
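The indexing-strategy knowledge asked for above can be demonstrated with SQLite standing in for the production database (the table and column names are hypothetical): the same filter goes from a full-table scan to an index search once an index on the filtered column exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, status TEXT, opened_at TEXT)")
conn.executemany(
    "INSERT INTO tickets (status, opened_at) VALUES (?, ?)",
    [("open" if i % 10 == 0 else "closed", f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

# Without an index, filtering on status scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tickets WHERE status = 'open'"
).fetchone()[3]

# An index on the filtered column lets the engine seek instead of scan.
conn.execute("CREATE INDEX idx_tickets_status ON tickets(status)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tickets WHERE status = 'open'"
).fetchone()[3]

print(plan_before)  # a SCAN of the tickets table
print(plan_after)   # a SEARCH using idx_tickets_status
```

The same before/after comparison is how support engineers typically verify that a slow query actually benefits from a proposed index.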

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : SAP Sales and Distribution (SD) Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams. Roles & Responsibilities: - Expected to be an SME - Collaborate and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the effort to design, build, and configure applications - Act as the primary point of contact for the project - Manage the team and ensure successful project delivery - Collaborate with multiple teams to make key decisions - Provide solutions to problems for the immediate team and across multiple teams Professional & Technical Skills: - Must To Have Skills: Proficiency in SAP Sales and Distribution (SD) - Strong understanding of statistical analysis and machine learning algorithms - Experience with data visualization tools such as Tableau or Power BI - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity Additional Information: - The candidate should have a minimum of 7.5 years of experience in SAP Sales and Distribution (SD) - This 
position is based in Mumbai - A 15 years full-time education is required

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : ServiceNow IT Service Management Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams. Roles & Responsibilities: - Expected to be an SME - Collaborate and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the effort to design, build, and configure applications - Act as the primary point of contact for the project - Manage the team and ensure successful project delivery - Collaborate with multiple teams to make key decisions - Provide solutions to problems for the immediate team and across multiple teams Professional & Technical Skills: - Must To Have Skills: Proficiency in ServiceNow IT Service Management - Strong understanding of statistical analysis and machine learning algorithms - Experience with data visualization tools such as Tableau or Power BI - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity Additional Information: - The candidate should have a minimum of 7.5 years of experience in ServiceNow IT Service Management - 
This position is based at our Bengaluru office - A 15 years full-time education is required

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role : Service Management Practitioner Project Role Description : Support the delivery of programs, projects or managed services. Coordinate projects through contract management and shared service coordination. Develop and maintain relationships with key stakeholders and sponsors to ensure high levels of commitment and enable strategic agenda. Must have skills : Microsoft Power Business Intelligence (BI) Good to have skills : Microsoft Power Apps Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Service Management Practitioner, you will support the delivery of programs, projects, or managed services. Coordinate projects through contract management and shared service coordination. Develop and maintain relationships with key stakeholders and sponsors to ensure high levels of commitment and enable strategic agenda. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work-related problems. - Coordinate the delivery of programs, projects, or managed services. - Develop and maintain relationships with key stakeholders and sponsors. - Ensure high levels of commitment from stakeholders. - Enable strategic agenda through effective coordination. - Provide regular updates and reports on project progress. Professional & Technical Skills: - Must To Have Skills: Proficiency in Microsoft Power Business Intelligence (BI). - Good To Have Skills: Experience with Microsoft Power Apps. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. 
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 3 years of experience in Microsoft Power Business Intelligence (BI). - This position is based at our Chennai office. - A 15 years full-time education is required.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Machine Learning Operations Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : BE Summary: As an AI / ML Engineer, you will develop applications and systems that utilize AI to improve performance and efficiency, including deep learning, neural networks, chatbots, and natural language processing. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Implement machine learning models for various applications. - Optimize AI algorithms for improved performance. - Collaborate with cross-functional teams to integrate AI solutions. - Stay updated with the latest trends in AI and ML technologies. - Provide technical guidance and mentor junior team members. Professional & Technical Skills: - Must To Have Skills: Proficiency in Machine Learning Operations. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 3 years of experience in Machine Learning Operations. - This position is based at our Kolkata office. - A BE degree is required.
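The data-munging bullet — cleaning, transformation, and normalization to ensure data quality — can be made concrete with a small pure-Python sketch (the records and fields are invented):

```python
# Raw readings with gaps and inconsistent casing -- typical munging input.
records = [
    {"city": " Kolkata ", "temp_c": 31.0},
    {"city": "kolkata",   "temp_c": None},   # missing value
    {"city": "Delhi",     "temp_c": 26.0},
    {"city": "DELHI",     "temp_c": 41.0},
]

# Cleaning: drop rows with missing values, standardize the city label.
clean = [
    {"city": r["city"].strip().title(), "temp_c": r["temp_c"]}
    for r in records
    if r["temp_c"] is not None
]

# Normalization: min-max scale temperatures into [0, 1] so features
# with different ranges become comparable for downstream ML models.
lo = min(r["temp_c"] for r in clean)
hi = max(r["temp_c"] for r in clean)
for r in clean:
    r["temp_norm"] = (r["temp_c"] - lo) / (hi - lo)

print(clean)
```

In practice the same three steps are usually expressed with pandas or PySpark, but the logic is identical.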

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Lead the implementation of large language models in AI applications. - Research and apply cutting-edge AI techniques to enhance system performance. - Contribute to the development of innovative AI solutions for complex business challenges. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Large Language Models. 
- This position is based at our Bengaluru office. - A 15 years full-time education is required.

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Lead the implementation of large language models in AI applications. - Research and apply cutting-edge AI techniques to enhance system performance. - Contribute to the development of innovative AI solutions for complex business challenges. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Large Language Models. 
- This position is based at our Bengaluru office. - A 15 years full-time education is required.

Posted 3 weeks ago

Apply

7.5 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : SAP FI CO Finance Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams. Roles & Responsibilities: - Expected to be an SME - Collaborate and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute on key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the effort to design, build, and configure applications - Act as the primary point of contact for the project - Manage the team and ensure successful project delivery - Collaborate with multiple teams to make key decisions - Provide solutions to problems for the immediate team and across multiple teams Professional & Technical Skills: - Must To Have Skills: Proficiency in SAP FI CO Finance - Strong understanding of statistical analysis and machine learning algorithms - Experience with data visualization tools such as Tableau or Power BI - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity Additional Information: - The candidate should have a minimum of 7.5 years of experience in SAP FI CO Finance - This position is based at our Hyderabad office 
- A 15 years full-time education is required

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our client was founded in 2002. With offices in the US, India, Europe, Canada, Singapore, Costa Rica, Brazil, and the UK, they have national and international scope and reach, backed by decades of experience and deep domain expertise. They specialize in Products such as AI Governance/Data Privacy and Services such as Interactive (Product, Discovery, Research, User Journey, Prototyping), Talent, Cloud (Development, Transformation, SRE, Architecture), Engineering (Web, Mobile, Strategy), Enterprise (Salesforce, ServiceNow, SAP, Oracle, Microsoft, Workday), Training (Corporate Learning Design and Development), and building offshore cost-effective captive Global Capability Centers. Required Qualifications • 3 to 5 years of relevant experience in cybersecurity engineering, with deep expertise in Splunk, SIEM, SOAR, ML, and automated data pipelines. • 3+ years of experience with security automation platforms (SOAR) such as Splunk SOAR, XSOAR, Swimlane, etc. • 3+ years of experience in cyber data engineering or analytics, including log processing and data pipeline architecture. • Strong proficiency in Python, PowerShell, and API integrations. • Proven experience with GitLab, automation platform deployment, and pipeline troubleshooting. • Hands-on experience with ETL tools, relational and columnar databases, and data visualization tools such as Power BI. • Solid understanding of SIEM design, normalization, and correlation strategies. • Excellent debugging, problem-solving, and communication skills. • Bachelor's degree in Computer Science, Engineering, Cybersecurity, or an equivalent technical field (or 10+ years of experience). Preferred Qualifications • Hands-on experience with cloud environments such as AWS, Azure, or GCP. • Strong knowledge of cloud-native security technologies, serverless architecture, and containerized data flows. • Cybersecurity certifications such as CISSP, CISM, CISA, or equivalent.
• Experience working in Agile or DevSecOps environments with CI/CD pipelines. • Familiarity with corporate change management practices and IT governance frameworks.
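Log processing and normalization of the kind this role describes typically means mapping heterogeneous raw lines onto one common event schema before correlation. A minimal sketch in Python, with invented log formats and field names:

```python
import re

# Two hypothetical source formats feeding a SIEM pipeline.
SSH_RE = re.compile(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)")
HTTP_RE = re.compile(r'(?P<src_ip>\S+) - (?P<user>\S+) "\S+ \S+" 401')

def normalize(line):
    """Map a raw log line onto a common auth-failure event schema."""
    for source, pattern in (("ssh", SSH_RE), ("http", HTTP_RE)):
        m = pattern.search(line)
        if m:
            return {"source": source, "event": "auth_failure", **m.groupdict()}
    return None  # not an auth failure we recognize

events = [e for e in map(normalize, [
    "Jun 1 sshd[911]: Failed password for root from 10.0.0.5",
    '10.0.0.7 - alice "GET /admin" 401',
    "Jun 1 sshd[911]: Accepted password for bob from 10.0.0.9",
]) if e]

print(events)  # two normalized auth_failure events
```

Once every source emits the same shape, correlation rules ("N auth failures from one src_ip") can be written once instead of per log format.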

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Salesforce Einstein AI Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Lead the implementation of AI/ML models. - Conduct research on emerging AI technologies. - Optimize AI algorithms for performance and scalability. Professional & Technical Skills: - Must To Have Skills: Proficiency in Salesforce Einstein AI. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 3 years of experience in Salesforce Einstein AI. - This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work-related problems. - Develop applications and systems using AI tools and Cloud AI services. - Implement deep learning and neural networks in solutions. - Create chatbots and work on image processing tasks. - Collaborate with team members to provide innovative solutions. - Stay updated with the latest AI/ML trends and technologies. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms like linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques including data cleaning, transformation, and normalization. Additional Information: - The candidate should have a minimum of 5 years of experience in Large Language Models. - This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description eClerx is hiring a Product Data Management Analyst who will work within our Product Data Management team to help our customers enhance online product data quality for Electrical, Mechanical & Electronics products. The role also involves creating technical specifications and product descriptions for online presentation, as well as working on consultancy projects redesigning e-commerce customers’ website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all key stakeholders, highlighting opportunities for improvement and concerns, if any. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area is a plus. Apprentice_Analyst Roles and responsibilities: Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources such as the internet, specific websites, databases, etc. Data quality checks and correction. Data profiling and reporting (basic). Email communication with the client on request acknowledgment, project status, and responses to queries. Help customers enhance their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective. Provide technical consulting to customer category managers on industry best practices for product data enhancement. Technical And Functional Skills: Bachelor’s Degree in Engineering from the Electrical, Mechanical, or Electronics stream. Excellent technical knowledge of engineering products (pumps, motors, HVAC, plumbing, etc.) and technical specifications. Intermediate knowledge of MS Office/Internet.
About The Team eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 4 weeks ago

Apply

15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Details: Job Description Experience: 15+ years. Requires knowledge of Talend, plus knowledge of other data-related tools such as Databricks or Snowflake. The Senior Talend Developer/Architect is responsible for leading the design, development, and management of the INSEAD data infrastructure for the CRM ecosystem, developing Talend jobs & flows, and acting as a mentor for the other 3-4 Talend developers. This role will be instrumental in driving data pipeline architecture and ensuring data integrity, performance, and scalability using the Talend platform. The role is a key part of the HARMONIA project team while the engagement is active, and will also be part of the Harmonia Data Quality project and Data Operations scrum teams. It will contribute to additional activities such as data modelling & design, architecture, and integration, and propose technology strategy. The position holder must organize and plan her/his work with special consideration for a frictionless information flow between Digital Solutions and the relevant business department. He/she will collaborate closely with cross-functional teams to deliver high-quality data solutions that support strategic business objectives. Job Requirements Details: Design, develop, and deploy scalable ETL/ELT solutions using Talend (e.g. Data Stewardship, Management Console, Studio). Architect end-to-end data integration workflows. Establish development best practices, reusable components, and job templates to optimize performance and maintainability. Responsible for delivering robust data architecture and tested, validated, deployable jobs/flows to production environments, following Talend best practices and the JIRA development framework. Translate business requirements into efficient and scalable Talend solutions. Provide developer input and feedback on those requirements wherever necessary, including by actively leading brainstorming sessions arranged by the project manager.
Work closely with the Manager of Data Operations and Quality, project manager, business analysts, data analysts, developers, and other subject matter experts to align technical solutions with operational needs. Ensure alignment with data governance, security, and compliance standards. Ensure that new developments follow the standard styles already used in the current Talend jobs & flows developed by the current integrator and INSEAD teams. Actively participate in project-related activities and ensure the SDLC process is followed. Participate in the implementation and execution of data cleansing, normalization, deduplication, and transformation projects. Conduct performance tuning, error handling, monitoring, and troubleshooting of Talend jobs and environments. Contribute to sprint planning and agile ceremonies with the Harmonia Project Team and Data Operations Team. Document technical solutions, data flows, and design decisions to support operational transparency. Stay current with Talend product enhancements and industry trends, recommending upgrades or changes where appropriate. No budget responsibility. Personnel responsibility: Provide technical mentorship to junior Talend developers and contribute to developing the internal knowledge base (INSEAD and external).
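The deduplication-and-normalization step mentioned above reduces to a simple idea, sketched here in plain Python rather than Talend (the contact records and matching key are purely illustrative): normalize the matching key first, then keep one record per key.

```python
contacts = [
    {"email": "Jane.Doe@Example.com ", "name": "Jane Doe"},
    {"email": "jane.doe@example.com",  "name": "J. Doe"},
    {"email": "sam@example.com",       "name": "Sam"},
]

def key(record):
    # Normalize before matching: trim whitespace, lowercase the address.
    # Without this, the two Jane records would not be seen as duplicates.
    return record["email"].strip().lower()

# Deduplicate on the normalized key; first occurrence wins.
seen = {}
for rec in contacts:
    seen.setdefault(key(rec), {**rec, "email": key(rec)})

deduped = list(seen.values())
print(deduped)  # two records: jane.doe@example.com and sam@example.com
```

A Talend tUniqRow-style component applies the same logic declaratively; the survivorship rule (here, "first occurrence wins") is the main design decision either way.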

Posted 4 weeks ago

Apply

55.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Job Description
- Expertise in .NET Core and .NET Framework, including microservices and Azure services
- Expertise in Microsoft Azure services
- Understanding of OOP, design patterns, SOLID principles, and database normalization and optimization
- Hands-on with ODBC/JDBC and relational databases: SQL Server, Oracle, MySQL
- Multithreaded programming
- Deep experience in design and development of RESTful web services
- Hands-on with unit testing frameworks (along with a mock framework)
- Good at problem understanding, impact analysis, and troubleshooting; a strong logical thinker
- Independent work style; loves getting things done, proposing multiple solutions, and recommending the best option
- Agile/Scrum methodology practitioner, with good knowledge of Behaviour Driven Development (BDD), Test Driven Development (TDD), etc.
- Good knowledge of test automation frameworks, e.g. Selenium

Primary Skill
Backend services: .NET Framework, .NET Core, C#, ASP.NET, SQL Server, Oracle, PostgreSQL, Azure Services

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs.
It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must have skills: Software License Management
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Key Responsibilities:
a) Maintain software publisher licensing information for the assigned publishers (i.e., both entitlements and deployments).
b) Analyze software licensing agreements, create entitlement summaries, and summarize use-right information from software agreements.
c) Import licenses and agreements into the SAM tool (SNOW SLM / SAM Pro, Flexera, others).
d) Update software entitlement and agreement information in the SAM tool.
e) Maintain accurate records of software licenses and related assets, ensuring compliance with licensing agreements and regulations.
f) Develop and implement software license management policies and procedures, ensuring adherence to industry best practices and standards.
g) Maintain software installation records in the SAM tool and perform product normalization.
h) Perform license reconciliation in the SAM tool.
i) Work with internal stakeholders to ensure deployments of software applications are compliant and, if not, work with the stakeholders to remediate non-compliance.
j) Respond to customer queries on software licensing.
k) Create customized reports and recommendations reporting on SAM function activities.
l) Identify cost savings and license re-harvesting opportunities.
m) Drive periodic or ad-hoc stakeholder and project meetings.
Technical Experience:
- Competent in any two Tier 1 publishers (Microsoft, Oracle, IBM, VMware, SAP) and any two Tier 2 publishers (Salesforce, Adobe, Quest, Autodesk, Micro Focus, Citrix, Veritas, Informatica)
- Hands-on experience with ServiceNow SAM Pro, Flexera, or SNOW SLM
- Good understanding of publisher contracts, license metrics, and product use rights
- Experience in creating entitlements, license overview reports, and contracts
- Experience in handling software license requests and performing technical validation

Professional Attributes:
- Excellent communication skills
- Expert knowledge of MS Office applications (Excel & PowerPoint)
- Ability to work in a team environment

Must have Skills: Software licensing & Software Asset Management tools
Good to Have Skills: Analytical and communication skills
Educational Qualification: 15 years of full-time education
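At its core, the license reconciliation described above (responsibility h) compares entitlement counts against deployment counts per normalized product. A simplified Python sketch of that comparison; product names and counts are illustrative only and not tied to any SAM tool's API:

```python
# Simplified license reconciliation: entitlements vs. deployments per product.
# In practice both sides are exported from the SAM tool after product
# normalization; the figures here are made up for illustration.

entitlements = {"Visio Professional": 50, "AutoCAD": 20, "Oracle DB EE": 4}
deployments  = {"Visio Professional": 62, "AutoCAD": 15, "Oracle DB EE": 4}

def reconcile(entitled, deployed):
    """Return the per-product license position: positive means surplus
    licenses (re-harvesting candidates), negative means a compliance gap."""
    products = set(entitled) | set(deployed)
    return {p: entitled.get(p, 0) - deployed.get(p, 0) for p in products}

position = reconcile(entitlements, deployments)
gaps = {p: n for p, n in position.items() if n < 0}
```

Products with a negative position would be remediated with stakeholders or trued up; positive positions feed the cost-savings and re-harvesting analysis.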

Posted 4 weeks ago

Apply

0 years

2 - 6 Lacs

Pune, Maharashtra, India

On-site

Job Purpose
Responsible for managing end-to-end database operations, ensuring data accuracy, integrity, and security across systems. The position plays a key role in driving data reliability, availability, and compliance with operational standards.

Key Responsibilities
- Collate audit reports from the QA team and structure the data in accordance with Standard Operating Procedures (SOPs).
- Perform data transformation and validation for accuracy and consistency.
- Upload processed datasets into SQL Server using SSIS packages.
- Monitor and optimize database performance, identifying and resolving bottlenecks.
- Perform regular backups, restorations, and recovery checks to ensure data continuity.
- Manage user access and implement robust database security policies.
- Oversee database storage allocation and utilization.
- Conduct routine maintenance and support incident management, including root cause analysis and resolution.
- Design and implement scalable database solutions and architecture.
- Create and maintain stored procedures, views, and other database components.
- Optimize SQL queries for performance and scalability.
- Execute ETL processes and support seamless integration of multiple data sources.
- Maintain data integrity and quality through validation and cleansing routines.
- Collaborate with cross-functional teams on data solutions and project deliverables.

Educational Qualification: Any graduate

Required Skills & Qualifications
- Proven experience with SQL Server or similar relational database platforms.
- Strong expertise in SSIS, ETL processes, and data warehousing.
- Proficiency in SQL/T-SQL, including scripting, performance tuning, and query optimization.
- Experience in database security, user role management, and access control.
- Familiarity with backup/recovery strategies and database maintenance best practices.
- Strong analytical skills with experience working with large and complex datasets.
- Solid understanding of data modeling, normalization, and schema design.
- Knowledge of incident and change management processes.
- Excellent communication and collaboration skills.
- Experience with Python for data manipulation and automation is a strong plus.

Skills: SQL, Power BI, and Python
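The validation step that precedes the SSIS upload described above can be sketched in Python (the listing itself notes Python for data manipulation as a plus). Column names and validation rules here are assumptions for illustration, not taken from the posting:

```python
# Validate audit-report rows before loading them into SQL Server, rejecting
# rows that would violate integrity rules. Columns and rules are hypothetical.

from datetime import datetime

def validate_row(row):
    """Return a list of problems found in one row (empty list = valid)."""
    problems = []
    if not row.get("audit_id"):
        problems.append("missing audit_id")
    try:
        datetime.strptime(row.get("audit_date", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("bad audit_date")
    if not (0 <= float(row.get("score", -1)) <= 100):
        problems.append("score out of range")
    return problems

rows = [
    {"audit_id": "A-1", "audit_date": "2024-05-01", "score": "92"},
    {"audit_id": "",    "audit_date": "2024-05-01", "score": "88"},
    {"audit_id": "A-3", "audit_date": "01/05/2024", "score": "75"},
]
valid = [r for r in rows if not validate_row(r)]
```

Only the rows that pass every check would be staged for the SSIS load; rejected rows and their reasons would typically be logged back to the QA team.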

Posted 4 weeks ago

Apply

0 years

2 - 8 Lacs

Hyderābād

On-site

Job title: R&D Data Steward Manager Associate
Location: Hyderabad

About the job
Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions, in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions, and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference in patients’ daily lives, wherever they live, and enabling them to enjoy healthier lives. As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI), with a strong commitment to developing advanced data standards that increase reusability & interoperability and thus accelerate impact on global health. The R&D Data Office serves as a cornerstone of this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We sit in partnership with Business and Digital, and drive data needs across priority and transformative initiatives across R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain. As an integral team member, you will be responsible for defining how R&D's structured, semi-structured, and unstructured data will be stored, consumed, integrated/shared, and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in the development of sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable).
Position Summary:
The R&D Data Steward plays a critical role at the intersection of business and data, guiding business teams on how to unlock value from data. This role drives the definition and documentation of R&D data standards in line with the enterprise. Data stewards play heavily cross-functional roles and must be comfortable with R&D data domains, data policies, and data cataloguing.

Main responsibilities:
Work in collaboration with R&D Data Office leadership (including the Data Capability and Strategy Leads), business, R&D Digital subject matter experts, and other partners to:
- Understand the data-related needs of various cross-R&D capabilities (e.g., data catalog, master data management) and associated initiatives
- Influence, design, and document data governance policies, standards, and procedures for R&D data
- Drive data standard adoption across capabilities and initiatives; manage and maintain the quality and integrity of data via data enrichment activities (e.g., cleansing, validating, enhancing)
- Understand and adopt data management tools such as the R&D data catalogue
- Develop effective data sharing artifacts for appropriate usage of data across R&D data domains
- Ensure the seamless running of data-related activities and verify data standard application from ingest through access
- Maintain documentation and act as an expert on data definitions, data flows, legacy data structures, access rights models, etc. for the assigned domain
- Oversee data pipelines and availability and escalate issues where they surface; ensure on-schedule/on-time delivery and proactive management of risks/issues
- Educate and guide R&D teams on standards and information management principles, methodologies, best practices, etc.
- Oversee junior data stewards and/or business analysts, depending on the complexity or size of the initiatives/functions supported

Deliverables:
- Defines data quality and communication metrics for assigned domains and 1-2 business functions
- Implements continuous improvement opportunities such as functional training
- Accountable for data quality and data management activities for the assigned domains; facilitates data issue resolution
- Defines business terms and data elements (metadata) according to data standards, and ensures standardization/normalization of metadata
- Leads working groups to identify data elements, perform root cause and impact analysis, and identify improvements for metadata and data quality
- Regularly communicates with other data leads and expert Data Stewards, and escalates issues as appropriate

About you
- Experience in data wrangling, data programming, business data management, information architecture, technology, or related fields; demonstrated ability to understand end-to-end data use and needs
- Experience in CMC (Chemistry, Manufacturing & Controls) and in R&D/CRO/Pharma data domains (e.g., across research, clinical, regulatory, etc.)
- Solid grasp of data governance practices and a track record of implementation; ability to understand data processes and requirements, particularly in R&D at an enterprise level
- Demonstrated strong attention to detail, quality, time management, and customer focus; excellent written and oral communication skills
- Strong networking, influencing, and negotiating skills and superior problem-solving skills; demonstrated willingness to make decisions and to take responsibility for them
- Excellent interpersonal skills (team player); people management skills, either in a matrix or direct line function
- Familiarity with data management practices and technologies (e.g., Collibra, Informatica); hands-on experience with these tools is not required
- Knowledge of pharma R&D industry regulations and compliance requirements related to data governance

Education:
Scientific or life sciences background

Posted 4 weeks ago

Apply