
4342 Data Quality Jobs - Page 38

Set up a job alert
JobPe aggregates listings so they are easy to browse in one place, but you apply directly on the original job portal.

5.0 - 7.0 years

7 - 9 Lacs

Pune

Work from Office

Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
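For context on what this kind of knowledge-graph work looks like in code, here is a minimal sketch of building and querying a small RDF graph in Python, assuming the rdflib library; the ex: namespace, classes, and instances are illustrative placeholders, not the company's actual BFO/CCO-aligned ontology.

```python
# Minimal sketch: building and querying a tiny knowledge graph with rdflib.
# The ex: namespace, classes, and instances are illustrative placeholders.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)

# Declare a small class hierarchy (a BFO/CCO-aligned ontology would anchor
# these classes under the upper ontology's categories instead).
g.add((EX.Organization, RDF.type, RDFS.Class))
g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))

# Add an instance and a relationship.
g.add((EX.AcmeCorp, RDF.type, EX.Supplier))
g.add((EX.AcmeCorp, RDFS.label, Literal("Acme Corp")))
g.add((EX.AcmeCorp, EX.supplies, EX.WidgetProduct))

# SPARQL query: find all suppliers and what they supply.
results = g.query("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?supplier ?product WHERE {
        ?supplier a ex:Supplier ;
                  ex:supplies ?product .
    }
""")
for supplier, product in results:
    print(supplier, "->", product)
```

In a production setting the triples would typically live in one of the triple stores the posting names (GraphDB, Stardog) rather than in memory, with the same SPARQL running against the store's endpoint.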

Posted 2 weeks ago

Apply

1.0 - 5.0 years

1 - 5 Lacs

Pune

Work from Office

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines to support analytics, reporting, and operational use cases.
- Collaborate closely with product owners, analysts, and data consumers to translate business requirements into reliable data solutions.
- Develop and maintain data integration workflows across both cloud-native and on-premises systems.
- Champion best practices in data architecture, modelling, and quality assurance to ensure accuracy and performance.
- Participate in sprint planning, daily stand-ups, and retrospectives as an active member of a cross-functional agile team.
- Identify and remediate technical debt across legacy pipelines and contribute to the modernization of the data platform.
- Implement robust monitoring and alerting for pipeline health, data quality, and SLA adherence (see the sketch after this list).
- Write and maintain documentation for data flows, transformations, and system dependencies.
- Contribute to code reviews and peer development to foster a collaborative and high-quality engineering culture.
- Ensure adherence to security, privacy, and compliance standards in all data engineering practices.

Requirements:
- 5+ years of professional experience in data engineering, analytics engineering, or related fields.
- Bachelor's degree in computer science or an equivalent field, and 2+ years of experience.
- Advanced SQL skills, including performance tuning and query optimization.
- Expertise in Snowflake, including data warehousing concepts, architecture, and best practices.
- Experience with modern data transformation tools (e.g., dbt).
- Experience building and maintaining automated ETL/ELT pipelines, with a focus on performance, scalability, and reliability.
- Proficiency with version control systems (e.g., Git), working within CI/CD pipelines, and experience with environments that depend on infrastructure-as-code.
- Experience writing unit and integration tests for data pipelines.
- Familiarity with data modeling techniques (e.g., dimensional modeling, star/snowflake schemas).
- Experience with legacy, on-premises databases such as Microsoft SQL Server is preferred.
- Exposure to cloud platforms (e.g., AWS, Azure, GCP), cloud-native data tools, and data federation tools is a plus.
- Experience with SQL Server Reporting Services (SSRS) is beneficial.
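The monitoring and data-quality responsibilities above often reduce to lightweight automated checks run on a schedule; a minimal sketch using pandas, where the orders.csv file and its columns are hypothetical:

```python
# Minimal sketch of automated data-quality checks for a pipeline table,
# assuming pandas; the orders.csv file and its columns are hypothetical.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []

    # Completeness: key columns must not contain nulls.
    for col in ("order_id", "customer_id", "order_date"):
        nulls = df[col].isna().sum()
        if nulls:
            failures.append(f"{col}: {nulls} null values")

    # Uniqueness: the primary key must not repeat.
    dupes = df["order_id"].duplicated().sum()
    if dupes:
        failures.append(f"order_id: {dupes} duplicate rows")

    # Freshness: the newest record should be recent (a simple SLA check).
    latest = pd.to_datetime(df["order_date"]).max()
    if (pd.Timestamp.now() - latest).days > 1:
        failures.append(f"stale data: latest order_date is {latest:%Y-%m-%d}")

    return failures

if __name__ == "__main__":
    orders = pd.read_csv("orders.csv")
    for failure in run_quality_checks(orders):
        print("FAILED:", failure)  # in production this would raise an alert
```

In a dbt-based stack like the one described, the same rules would usually be expressed as dbt tests (not_null, unique, freshness) rather than hand-rolled scripts.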

Posted 2 weeks ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Application Development Associate Advisor - HIH - Evernorth

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary: Cigna, a leading health services company, is looking for an exceptional Application Development Associate Advisor to join our Client & Customer Services organization. The Full Stack Engineer is responsible for delivering a business need end to end, from understanding the requirements to deploying the software into production. This role requires fluency in some of the critical technologies, proficiency in others, and a hunger to learn on the job and add value to the business. Critical attributes of a Full Stack Engineer include ownership, eagerness to learn, and an open mindset. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous-improvement mindset, drive the adoption of CI/CD tools, and support the improvement of the tool sets and processes.

Job Description & Responsibilities: Full Stack Engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers, not institutionalized developers.

Experience Required:
- 7+ years of C# or .NET experience.
- 7+ years of JavaScript/HTML experience.
- 3+ years of AngularJS and Node.js experience.

Experience Desired:
- Knowledge and/or experience with SQL is a plus.
- Experience with version management tools; Git preferred.
- Experience with BDD and TDD development methodologies.
- Experience working in agile CI/CD environments; Jenkins experience preferred.
- Knowledge and/or experience with healthcare information domains preferred.

Education and Training Required:
- Bachelor's degree (or equivalent) required.

Additional Skills:
- Design and architect solutions independently.
- Take ownership and accountability.
- Write referenceable and modular code.
- Be fluent in particular areas, proficient in many, and passionate about learning.
- Have a quality mindset: not just code quality, but ongoing application/data quality, monitoring data to identify problems before they have business impact.
- Take risks and champion new ideas.

Join us in driving growth and improving lives.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Job Description Summary: The Sr Data Analyst - BI Reporting will play a key role in developing end-to-end reporting solutions, from data collection and transformation to report generation and visualization. This role involves working at the cutting edge of data engineering and analytics, leveraging machine learning, predictive modeling, and generative AI to drive business insights.

Roles and Responsibilities:
- Design visualizations and create dashboards/reports using Power BI (Tableau experience is good to have).
- Explore, clean, and visualize data sets to prepare for analysis/reporting, ensuring data quality and consistency.
- Develop and maintain BI semantic data models for large-scale data warehouses/data lakes that are ultimately consumed by reporting tools.
- Leverage SQL and big data tools (e.g., Hadoop, Spark) for data manipulation and optimization.
- Build advanced data models and pipelines using SQL and other tools.
- Ensure data quality, consistency, and integrity throughout the data lifecycle.
- Collaborate closely with data engineers, analysts, and other stakeholders to understand data requirements and optimize the data flow architecture.
- Document data processes, data architecture, modelling/flow charts, and best practices for future reference and knowledge sharing.

Desired Characteristics - Technical Expertise:
- 5 to 8 years of experience in data analytics, data mining/integration, BI development, reporting, and insights.
- Strong knowledge of SQL and experience with big data technologies such as Hadoop, Spark, or similar tools for data manipulation.
- Develop advanced visualizations/reports that highlight trends, patterns, and outliers, making complex data easily understandable for various business functions.
- Implement UI/UX best practices to improve navigation, data storytelling, and the overall usability of dashboards, ensuring that reports are actionable, user-friendly, and provide the desired insights.

Additional Information: Relocation Assistance Provided: Yes

Posted 2 weeks ago

Apply

1.0 - 3.0 years

4 - 8 Lacs

Bengaluru

Work from Office

At Hitachi Energy our purpose is advancing a sustainable energy future for all. We bring power to our homes, schools, hospitals, and factories. Join us and work with fantastic people, while learning and developing yourself on projects that have a real impact on our communities and society. Bring your passion, bring your energy, and be part of a global team that appreciates a simple truth: Diversity + Collaboration = Great Innovation.

Your responsibilities:
- Manage employee data accurately across the complete Hire-to-Retire process in Hi-Next, as this data flows to all downstream applications.
- Work across geographies and different regulatory environments.
- Analyze and understand complex problems and their resulting dependencies.
- Demonstrate excellent attention to detail, time management, and multitasking skills.
- Support the preparation of offer letters for shortlisted candidates and keep track of BGV status.
- Support key Employee Life Cycle (ELC) processes such as onboarding of new professionals, professionals' data management, absence management, visa renewals, transfers, and offboarding.
- Monitor payroll-related data in Workday, including salary and regular payments, and ensure supporting documentation is in place.
- Oversee time-sensitive procedures and assure that data such as salary, one-time payments, and bank information is validated as correct and genuine.
- Adhere to the SOPs and work instructions (WIs) defined for each process and follow the documented steps.
- Identify process deficiencies and perform initial root cause analysis in support of improvements.
- Achieve SLA targets and agreed KPIs; identify root causes of operational issues and implement improvement measures.
- Capture ELC-related activities and employee queries in the appropriate IT systems (ticketing tool); own the complete employee life cycle from Hire to Retire/Termination.
- Ensure execution and delivery of ELC services according to the defined Service Level Agreements, such as TAT, quality, and customer satisfaction (C-SAT).
- Extract reports for internal data quality through ad-hoc queries or customized transactions, with willingness to learn and apply that learning in an innovative manner.
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.

Your background:
- Graduate/MBA/any degree.
- At least 1-3 years of experience in HR Shared Services in a global organization is an added advantage.
- Proficiency in MS Office and excellent written and verbal communication.
- Ability to manage multiple demands on time and work with cross-functional teams.
- Flexibility to work in any shift.
- A collaborative, solutions-oriented approach, strong analytical skills, and a proactive, can-do attitude in serving customers.

Qualified individuals with a disability may request a reasonable accommodation if you are unable or limited in your ability to use or access the Hitachi Energy career site as a result of your disability. You may request reasonable accommodations by completing a general inquiry form on our website. Please include your contact information and specific details about your required accommodation to support you during the job application process.

Posted 2 weeks ago

Apply

10.0 - 18.0 years

7 - 8 Lacs

Noida

Work from Office

Senior Technical Team Leader - Business Intelligence, Data Governance & Reporting

Key Responsibilities:
- Lead the development and execution of BI strategies, tools, and reporting solutions in alignment with business objectives.
- Serve as a subject matter expert for BI within the organization, supporting internal initiatives and mentoring team members on best practices.
- Design, implement, and maintain scalable data models, analytical layers, and interactive dashboards using modern BI tools (primarily Power BI).
- Continuously optimize BI architecture to ensure scalability, performance, and adaptability to evolving business needs.
- Apply performance optimization techniques to improve data processing, dashboard responsiveness, and user experience.
- Ensure high standards of data quality, consistency, and governance across all BI solutions.
- Collaborate closely with cross-functional teams, including data engineers, data scientists, and business stakeholders, to define and meet BI requirements.
- Utilize advanced Power BI features (DAX, Power Query, Power BI Service) to build robust, automated reporting and analytical solutions.
- Host workshops and office hours to guide business units on Power BI usage, self-service BI strategies, and technical troubleshooting.
- Stay abreast of emerging BI tools, trends, and methodologies to drive continuous innovation and improvement.

Desired Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Mathematics, or a related field.
- 10+ years of experience in Business Intelligence, including data warehousing, ETL pipelines, and reporting.
- Expert-level proficiency in BI tools, particularly Power BI.
- Certified Power BI Data Analyst Associate (PL-300) and Certified Data Management Professional (CDMP) - DAMA.
- Strong command of DAX, Power Query, and SQL for data modeling and integration, and Python for analysis.
- Proficient in Agile/Scrum or traditional project management methodologies.
- Foster a collaborative team culture and encourage continuous learning.

Total Experience Expected: 14-18 years

Posted 2 weeks ago

Apply

11.0 - 13.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Application Development Advisor - Data Engineering

Position Overview: We're seeking a versatile Data Engineer to contribute to all aspects of our customer data platform. You'll work on various data engineering tasks, from designing data pipelines to implementing data integration solutions, playing a crucial role in enhancing our data infrastructure.

Roles & Responsibilities:
- Develop and maintain full-stack applications using our cutting-edge tech stack.
- Develop and maintain data pipelines using Snowflake, Databricks, and AWS services.
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.
- Optimize data storage and retrieval for performance and scalability.
- Implement data quality checks and monitoring to ensure data integrity (see the sketch after this section).
- Participate in code reviews and contribute to best practices.
- Experiment with new technologies and tools to improve our data platform.

Qualifications - Required Experience & Skills:
- 11 to 13 years of development experience.
- Strong proficiency in Python.
- Extensive experience with Snowflake and Databricks.
- Proficiency with AWS services (Lambda, EC2, Glue, Kinesis, SQS, SNS, SES, Pinpoint).
- Strong analytical skills and the ability to work independently.
- Familiarity with SQL and database management.
- Proficiency with version control systems (e.g., Git).

Preferred Experience & Skills:
- Advanced proficiency with Snowflake and Databricks.
- Knowledge of cloud architecture and best practices.
- Ability to experiment with and adopt new technologies quickly.
- Experience with end-user messaging platforms.
- Familiarity with containerization and orchestration (e.g., Docker, Kubernetes).
- Understanding of DevOps practices and CI/CD pipelines.

About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
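A data-quality check of the kind listed above might run as a scheduled job against Snowflake; a minimal sketch assuming the snowflake-connector-python package, where the connection parameters and the events table are hypothetical placeholders:

```python
# Minimal sketch of a scheduled data-quality check against Snowflake,
# assuming snowflake-connector-python; the warehouse/database names and
# the EVENTS table are hypothetical placeholders.
import os
import snowflake.connector

def count_duplicate_events() -> int:
    """Return the number of duplicated event_id values in the events table."""
    conn = snowflake.connector.connect(
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        warehouse="ANALYTICS_WH",
        database="RAW",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute("""
            SELECT COUNT(*) FROM (
                SELECT event_id
                FROM events
                GROUP BY event_id
                HAVING COUNT(*) > 1
            ) AS dupes
        """)
        (duplicates,) = cur.fetchone()
        return duplicates
    finally:
        conn.close()

if __name__ == "__main__":
    n = count_duplicate_events()
    print(f"{n} duplicated event_id values")  # alert downstream if n > 0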

Posted 2 weeks ago

Apply

3.0 - 6.0 years

4 - 8 Lacs

Gurugram

Work from Office

Work Flexibility: Hybrid

What you will do:
- Data Collection and Management: Gather data from various sources, ensure data quality, and manage databases.
- Data Analysis: Analyze data using statistical techniques and tools to identify trends, patterns, and anomalies.
- Visualization: Develop and maintain interactive dashboards and reports using Power BI to present findings effectively.
- Automation: Use automation tools to automate repetitive data tasks, such as data extraction, cleaning, and reporting.
- Collaboration: Work with stakeholders to understand their data needs and provide analytical support; present findings and recommendations to management and other stakeholders, and communicate technical information to non-technical audiences effectively.
- Problem Solving: Identify and solve business problems through data analysis and automation.
- Documentation: Create and maintain documentation for data processes, dashboards, and reports.

What you need:

Required:
- Education: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
- Experience: 3-6 years.
- Tools: Power BI; UiPath/Power Automate or similar for robotic process automation and workflow automation; SQL and Python/R.

Preferred:
- Programming: Proficiency in SQL, Python, or R for data manipulation and analysis.

Travel Percentage: None

Posted 2 weeks ago

Apply

3.0 - 10.0 years

9 - 14 Lacs

Hyderabad

Work from Office

We are seeking a detail-oriented Language Data Operations Manager to join the Oracle Analytics data team. This manager role focuses on creating, managing, and optimizing data operations to power AI model development, covering both data procurement and annotation. You will be responsible for hiring and developing a team of data engineers and annotators, building relationships with company-internal data, engineering, and legal teams, and ensuring data quality, coverage, and timely delivery to downstream machine learning teams. A background in language data management, AI, linguistics, or data engineering, and practical experience managing data operations for AI development in an industry setting, are a must. Familiarity with Oracle business applications and analytics tools is a strong plus. The ideal candidate will have a strong sense of ownership, a deep understanding of AI data requirements, and a demonstrated ability to deliver results within tight timelines.

Career Level: M3

Key Responsibilities:
- Collaborate with cross-functional teams to gather language data resource requirements.
- Develop and maintain relationships with data vendors.
- Develop and maintain data annotation guidelines and implement quality control mechanisms (see the sketch after this posting).
- Identify data operations bottlenecks and improvements, and work with engineering teams to resolve them.
- Lead a small to medium-sized team of data annotators.
- Own data delivery and quality standards.

Required Qualifications:
- Bachelor's degree in Business Administration, Information Systems, Data Science, Linguistics, or a related field.
- 6-10+ years of experience in data operations, data annotation, or a related role.
- 3+ years of experience in team management.
- Proficiency with data annotation tools, standards, and methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Strong sense of ownership and a self-starter mindset.
- Developer experience with SQL, Python, and data visualization tools.
- Knowledge of machine learning concepts and how annotated data supports model development.

Preferred Qualifications:
- Familiarity with Oracle Fusion applications and modules.
- Previous experience working in an Agile environment.
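One common quality-control mechanism for annotation work is inter-annotator agreement; a minimal sketch computing Cohen's kappa in plain Python, where the labels and annotators are hypothetical:

```python
# Minimal sketch of an annotation quality-control check: Cohen's kappa
# between two annotators over the same items. Labels are hypothetical.
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Chance-corrected agreement between two annotators."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement estimated from each annotator's label marginals.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label
        return 1.0
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    ann1 = ["POS", "NEG", "POS", "NEU", "POS"]
    ann2 = ["POS", "NEG", "NEU", "NEU", "POS"]
    print(f"kappa = {cohens_kappa(ann1, ann2):.2f}")  # ~0.69 here
```

Tracking kappa per guideline revision is a simple way to tell whether guideline changes are actually making annotations more consistent.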

Posted 2 weeks ago

Apply

7.0 - 10.0 years

20 - 25 Lacs

Kolkata

Work from Office

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Requirement gathering and analysis.
- Design of data architecture and data models to ingest data.
- Experience with different databases such as Synapse, SQL DB, and Snowflake.
- Design and implement data pipelines using Azure Data Factory, Databricks, and Synapse (see the sketch after this posting).
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases.
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage.
- Implement data security and governance measures.
- Monitor and optimize data pipelines for performance and efficiency.
- Troubleshoot and resolve data engineering issues.
- Hands-on experience with Azure Functions and other components such as real-time streaming.
- Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations.
- Provide optimized solutions for any problem related to data engineering.
- Ability to work with a variety of sources: relational databases, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables.

Mandatory skill sets: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling
Preferred skill sets: PySpark, Databricks
Years of experience required: 7-10 years
Education qualification: B.Tech/MCA and MBA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 21 more}
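The ADF/Databricks pipeline work described above typically comes down to PySpark transformations over Delta tables; a minimal sketch of an ingest-validate-write step, where the storage path, column names, and quality rule are hypothetical:

```python
# Minimal PySpark sketch of an ingest-validate-write step on Databricks,
# assuming Delta Lake is available; the path, columns, and quality rule
# are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

# Read raw files landed in the data lake (e.g., by an ADF copy activity).
raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Basic data-quality rule: keep rows with a non-null key and positive amount;
# route the rest to a quarantine table for inspection.
amount = F.col("amount").cast("double")
good = raw.filter(F.col("order_id").isNotNull() & (amount > 0))
bad = raw.subtract(good)

# Write curated data and rejects as Delta tables.
good.write.format("delta").mode("append").saveAsTable("curated.orders")
bad.write.format("delta").mode("append").saveAsTable("quarantine.orders")

print(f"ingested={good.count()} quarantined={bad.count()}")
```

Splitting rejects into a quarantine table, rather than dropping them, keeps the pipeline auditable and makes root-cause analysis of upstream issues much easier.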

Posted 2 weeks ago

Apply

6.0 - 9.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Get to Know the Team: At Grabber Technology Solutions (GTS), we revolutionise the technology experience for every Grabber. Our mission is to empower our team with seamless solutions that enhance their daily work. We are a diverse group of forward-thinkers committed to creating personalised IT experiences. If you're passionate about customer-centric innovation and technology at Grab, come join us and help shape the future of technology!

Get to Know the Role: We are looking for an experienced Senior Specialist, Configuration Management to drive the accuracy, integrity, and strategic value of our Configuration Management Database (CMDB). This important individual contributor role will be the primary owner and performer of CMDB operations, ensuring it serves as the definitive source of truth for our IT landscape. You understand configuration management mechanics, including the seamless integration of hardware and software assets within the CMDB framework. You will report to Manager II, IT Service Transition. This role is based in Bangalore.

The Critical Tasks You Will Perform:
- Own and maintain the Configuration Management Database (CMDB), ensuring accuracy and completeness by collaborating with cross-functional teams on Configuration Item (CI) identification, documentation, and lifecycle management.
- Lead and evolve Software Asset Management (SAM) processes, defining inclusive policies, tools, and procedures for licence tracking, compliance, usage, and optimisation.
- Identify and implement opportunities to streamline and automate Configuration Management processes within the ITSM platform, ensuring seamless integration with core ITSM functions like Change, Incident, Problem, and Release Management.
- Generate regular reports and KPIs, conduct configuration audits, and support risk assessments to address discrepancies and ensure compliance.
- Provide expert support for Change Management processes, contributing to accurate and collaborative impact assessments for changes affecting configurations.
- Stay current with industry trends and emerging technologies, recommending strategic process and tool improvements to enhance Configuration and Asset Management practices.

What Essential Skills You Will Need:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6 to 9 years of hands-on experience in IT Operations, Service Management, or Configuration Management roles.
- Deep, hands-on expertise in configuration management principles and practices, including CMDB data modelling, CI lifecycle, relationships, and data quality.
- Track record in defining and implementing Hardware Asset Management (HAM) and Software Asset Management (SAM) processes, policies, and tools.
- Hands-on experience with automated discovery and reconciliation tools and integrating data from multiple IT systems.
- Demonstrated experience defining and generating reports on KPIs and building data visualisations.

Good to have:
- ITIL Expert (v3/v4) certification.
- COBIT 5 Foundation certification.
- Lean/Six Sigma certification.

About Grab and Our Workplace: Grab is Southeast Asia's leading superapp. From getting your favourite meals delivered to helping you manage your finances and getting around town hassle-free, we've got your back with everything. In Grab, purpose gives us joy and habits build excellence, while harnessing the power of Technology and AI to deliver the mission of driving Southeast Asia forward by economically empowering everyone, with heart, hunger, honour, and humility.

Life at Grab: We care about your well-being at Grab. Here are some of the global benefits we offer:
- Term Life Insurance and comprehensive Medical Insurance.
- With GrabFlex, create a benefits package that suits your needs and aspirations.
- Celebrate moments that matter in life with loved ones through Parental and Birthday leave, and give back to your communities through Love-all-Serve-all (LASA) volunteering leave.
- A confidential Grabber Assistance Programme to guide and uplift you and your loved ones through life's challenges.

What We Stand For at Grab: We are committed to building an inclusive and equitable workplace that enables diverse Grabbers to grow and perform at their best. As an equal opportunity employer, we consider all candidates fairly and equally regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, family commitments, physical and mental impairments or disabilities, and other attributes that make them unique.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Job Title: Lead, Analytics & Operations Strategy

About Skyhigh Security: Skyhigh Security is a dynamic, fast-paced cloud company that is a leader in the security industry. Our mission is to protect the world's data, and because of this, we live and breathe security. We value learning at our core, underpinned by openness and transparency. Since 2011, organizations have trusted us to provide them with a complete, market-leading security platform built on a modern cloud stack. Our industry-leading suite of products radically simplifies data security through easy-to-use, cloud-based, Zero Trust solutions that are managed in a single dashboard, powered by hundreds of employees across the world. With offices in Santa Clara, Aylesbury, Paderborn, Bengaluru, Sydney, Tokyo and more, our employees are the heart and soul of our company. Skyhigh Security is more than a company; here, when you invest your career with us, we commit to investing in you. We embrace a hybrid work model, creating the flexibility and freedom you need from your work environment to reach your potential. From our employee recognition program to our Blast Talks learning series and team celebrations (we love to have fun!), we strive to be an interactive and engaging place where you can be your authentic self. We are on social media too! Follow us on LinkedIn and Twitter @SkyhighSecurity.

Role Overview: Are you a data expert who sees beyond the numbers to the story they tell? Do you thrive on transforming complex data into strategic insights that drive business decisions? We are looking for an Analytics & Operations Strategy Lead to join our team and become a pivotal voice in shaping our company's direction. You will be instrumental in driving our data-driven decision-making and operational excellence. You'll be responsible for unifying our analytics and operations efforts, fostering cross-functional collaboration, and developing scalable solutions that impact the entire organization.

What You'll Do:
- Tell Stories with Data: Transform complex data into clear, compelling narratives that inform business strategy and drive action. Develop and present insightful reports, dashboards, and presentations to leadership and various teams.
- Automate and Scale Analytics & Operations: Design, build, and maintain robust and scalable analytics solutions. You will champion the automation of processes, implement scalable solutions, and empower stakeholders with self-service access to critical data.
- Drive Strategic Alignment: Act as a critical thought partner to cross-functional teams, including Product, Marketing, Sales, and Engineering. You will use your analytical expertise to understand their challenges, identify opportunities, and build consensus on strategic initiatives.
- Mentor and Lead Junior Team Members: Provide guidance, mentorship, and support to junior analysts and operations specialists. Foster a culture of continuous learning, professional development, and high performance within the team.
- Build Trust in Our Data: Take ownership of our data quality and integrity. You will be a key player in developing and implementing data governance best practices, ensuring our datasets are accurate, reliable, and trusted as the single source of truth.
- Deep Dive Analysis: Conduct sophisticated exploratory analysis to identify key business trends, challenges, and opportunities. Your work will form the foundation of our strategic planning and decision-making processes.

Qualifications:
- 8 to 12 years of experience in data analytics, business intelligence, and operations roles, with a proven track record of driving impact.
- Bachelor's degree in a quantitative field (e.g., Business Analytics, Computer Science, Statistics, Economics, Engineering) or equivalent practical experience; Master's degree preferred.
- Strong proficiency in data visualization tools (e.g., Tableau, Power BI, Looker) and advanced Excel.
- Proven experience in process automation and building scalable solutions.
- Excellent communication, presentation, and interpersonal skills with the ability to influence and collaborate effectively across all levels of the organization.
- Demonstrated leadership abilities, including mentoring and developing team members.
- Strong strategic thinking and problem-solving skills, with the ability to prioritize and manage multiple initiatives simultaneously.

Preferred Qualifications:
- Familiarity with project management methodologies (e.g., Agile, Scrum).
- Familiarity with advanced statistical techniques and their business applications.
- Experience in Cybersecurity and/or SaaS.

Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees:
- Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement

We're serious about our commitment to a workplace where everyone can thrive and contribute to our industry-leading products and customer support, which is why we prohibit discrimination and harassment based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Experience in data warehousing, solution design, and data analytics.
- Experience in data modelling exercises such as dimensional modelling and data vault modelling.
- Understand, interpret, and clarify functional as well as technical requirements.
- Understand the overall system landscape, including upstream and downstream systems.
- Understand ETL tech specifications and develop code efficiently.
- Ability to leverage Informatica Cloud features/functions to achieve the best results.
- Hands-on experience in performance tuning and pushdown optimization in IICS.
- Provide mentorship on debugging and problem-solving.
- Review and optimize ETL tech specifications and code developed by the team.
- Ensure alignment with overall system architecture and data flow.

Mandatory skill sets: Data Modelling, IICS/any leading ETL tool, SQL
Preferred skill sets: Python
Years of experience required: 7-10 years
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Required Skills: ETL Tools
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 21 more}

Posted 2 weeks ago

Apply

6.0 - 11.0 years

18 - 20 Lacs

Bengaluru

Work from Office

Senior Software Test Engineer - GenAI Testing - Kongsberg Digital

About the Role: We are seeking a passionate and forward-thinking Senior QA Engineer to lead quality assurance for Generative AI (GenAI) solutions embedded within our Digital Twin platform. This is a high-impact role that goes beyond traditional QA, focusing on the nuanced evaluation, reliability, and guardrails of AI-powered systems in production. You will be responsible not just for testing, but also for establishing evaluation frameworks, defining AI quality benchmarks, and upskilling other QA engineers in GenAI testing methods. The ideal candidate brings a mix of structured QA discipline, hands-on familiarity with GenAI systems (LLMs, RAG, agents), and a strong sense of ownership.

Key Responsibilities:
- Design and implement end-to-end QA strategies for applications using Node.js, integrated with LLMs, retrieval-augmented generation (RAG), and agentic AI workflows.
- Establish comprehensive benchmarks and quality metrics for GenAI components, including accuracy, coherence, relevance, stability, and safety.
- Develop structured evaluation datasheets for LLM behaviour validation: test prompts, expected responses, classification criteria, and scoring rubrics (see the sketch after this posting).
- Perform data quality testing for RAG databases and ensure relevant, high-quality retrieval to minimize hallucinations and improve grounding.
- Conduct A/B testing across model versions, prompt designs, and system configurations to measure and compare output quality.
- Define methodologies and simulate non-deterministic behaviours using agentic AI testing techniques.
- Collaborate closely with developers, product owners, and AI engineers to test prompt engineering pipelines, function-calling interfaces, and fallback logic.
- Build QA automation where applicable and integrate GenAI evaluations into CI/CD pipelines.
- Lead internal capability development by mentoring QA peers on GenAI testing practices and helping evolve the organization's AI quality maturity.

Required Skills and Qualifications:
- 6+ years of experience in software quality assurance, with at least 3+ years working in or around GenAI or LLM-based systems.
- Deep understanding of GenAI quality dimensions: response grounding, factual correctness, context awareness, and hallucination minimization.
- Experience creating and maintaining LLM evaluation datasets and designing test cases for dynamic prompt behaviour.
- Hands-on experience with tools and techniques for testing retrieval pipelines, embedding quality, and vector similarity results in RAG architectures.
- Familiarity with non-deterministic testing strategies, agent loop evaluation, and multi-step LLM task validation.
- Comfortable working with APIs, logs, test scripts, and tracing tools to validate both system and AI behaviour.
- Strong analytical thinking and a methodical approach to identifying bugs, regressions, and inconsistencies in AI outputs.
- Bachelor's or Master's degree in engineering.

Preferred Skills:
- Experience with GenAI tools/platforms like OpenAI, LangChain, Semantic Kernel, Hugging Face, Pinecone, or Weaviate.
- Exposure to evaluating LLMs in production settings, including safety nets, guardrails, and red-teaming approaches.
- Familiarity with prompt tuning, few-shot learning, and function/tool calling in LLMs.
- Basic scripting knowledge (Python, JavaScript, or TypeScript) for building test harnesses or validation utilities.

OUR POWER IS CURIOSITY, CREATION AND INNOVATION: We believe you love to experiment, challenge the established, co-create, develop and cultivate. Together we can explore new answers to today's challenges and future opportunities, and talk about how industrial digitalisation can be a part of the solution for a better tomorrow. We believe that different perspectives are crucial for developing gamechanging technology for a better tomorrow. Join us in taking on this challenge!
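An evaluation datasheet of the kind described above is, at its core, structured test cases plus a scoring function run over repeated samples; a minimal, model-agnostic sketch in Python, where the generate() callable and the keyword rubric are hypothetical stand-ins for a real LLM client and a production scorer:

```python
# Minimal, model-agnostic sketch of an LLM evaluation harness. The
# `generate` callable stands in for a real LLM client, and the keyword
# rubric is a deliberately simple placeholder for a production scorer
# (embedding similarity, LLM-as-judge, etc.).
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    required_keywords: list[str]   # facts the answer must mention
    forbidden_keywords: list[str]  # hallucination tripwires

def score(case: EvalCase, answer: str) -> float:
    """Score 0..1: fraction of required facts present, zeroed on tripwires."""
    text = answer.lower()
    if any(word.lower() in text for word in case.forbidden_keywords):
        return 0.0
    hits = sum(word.lower() in text for word in case.required_keywords)
    return hits / len(case.required_keywords)

def run_suite(generate: Callable[[str], str], cases: list[EvalCase],
              samples: int = 3) -> float:
    """Average score over repeated samples to smooth non-determinism."""
    total = 0.0
    for case in cases:
        total += sum(score(case, generate(case.prompt))
                     for _ in range(samples)) / samples
    return total / len(cases)

if __name__ == "__main__":
    cases = [EvalCase("What turbine does asset T-101 use?",
                      required_keywords=["T-101"],
                      forbidden_keywords=["unknown model X"])]
    fake_llm = lambda prompt: "Asset T-101 uses a gas turbine."
    print(f"suite score: {run_suite(fake_llm, cases):.2f}")
```

Sampling each prompt several times and averaging is the simplest way to make non-deterministic LLM behaviour comparable across model versions in A/B tests.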

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Data Scientist

Responsibilities:

Data Exploration and Insights:
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution.
- Work with large datasets from various sources, including Excel files and databases.

Data Quality Improvement:
- Perform analyses specifically aimed at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.

Weekly Playback and Collaboration:
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.

Project Scaling and Support:
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets, including prerequisites for batch ingestion and post-batch-ingestion analysis and validation of SCI records.

Data Analysis and Validation:
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.

Stakeholder Collaboration:
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic (see the sketch after this posting).
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills:
- SQL
- Machine Learning
- Data Analysis
- Jupyter Notebook
- Data Cleansing
- Fuzzy Logic
- Python
- Data Quality Improvement
- Data Validation
- Data Acquisition
- Communication and Collaboration
- Problem-solving and Analytical skills

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
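Fuzzy matching of the kind mentioned above can be prototyped with nothing beyond the standard library; a minimal sketch using difflib, where the record names and the similarity threshold are illustrative:

```python
# Minimal fuzzy-matching sketch using only the standard library's difflib.
# Record names and the 0.7 threshold are illustrative; production matching
# would normalise more aggressively and use blocking to scale.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] after simple whitespace/case normalisation."""
    norm = lambda s: " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def match_records(incoming: list[str], master: list[str],
                  threshold: float = 0.7) -> list[tuple[str, str, float]]:
    """Pair each incoming name with its best master match above threshold."""
    matches = []
    for name in incoming:
        best = max(master, key=lambda m: similarity(name, m))
        best_score = similarity(name, best)
        if best_score >= threshold:
            matches.append((name, best, best_score))
    return matches

if __name__ == "__main__":
    master = ["Acme Corporation Ltd", "Globex International"]
    incoming = ["ACME Corp. Ltd", "Globex Intl"]
    for name, best, best_score in match_records(incoming, master):
        print(f"{name!r} -> {best!r} ({best_score:.2f})")
```

Records scoring below the threshold are exactly the ones that would be flagged for the manual validation workflow described in the posting.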

Posted 2 weeks ago

Apply

7.0 - 10.0 years

9 - 12 Lacs

Pune

Work from Office

About the Job: We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and eager to explore the cutting edge of AI, we encourage you to apply.

Key Responsibilities:
- Dataiku Leadership: Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions.
- Data Pipeline Development: Design, develop, and optimize robust and scalable data pipelines to support business intelligence and advanced analytics projects, including developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources.
- Data Modeling & Architecture: Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance.
- ETL/ELT Expertise: Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility.
- Gen AI Integration: Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features.
- Programming & Scripting: Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions.
- Cloud Platform Deployment: Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency.
- Data Quality & Governance: Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets; implement data governance best practices.
- Collaboration & Mentorship: Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions; potentially mentor junior team members.
- Performance Optimization: Continuously monitor and optimize the performance of data pipelines and data systems.

Required Skills & Experience:
- Proficiency in Dataiku: Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications.
- Expertise in Data Modeling: Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures.
- ETL/ELT Processes & Tools: Extensive experience with ETL/ELT processes and a proven track record with various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS).
- Familiarity with LLM Mesh: Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration.
- Programming Languages: Strong proficiency in Python for data manipulation, scripting, and developing data solutions; solid command of SQL for complex querying, data analysis, and database interactions.
- Cloud Platforms: Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse).
- Gen AI Concepts: Basic understanding of Generative AI concepts and their potential applications in data engineering.
- Problem-Solving: Excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.

Bonus Points (Nice to Have):
- Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake).
- Familiarity with data governance and data security best practices.
- Experience with MLOps principles and tools.
- Contributions to open-source projects related to data engineering or AI.

Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

5 - 8 Lacs

Pune

Remote

Employment Type: Contract (Remote)

Job Summary: We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques.

Key Responsibilities:
- Design and implement scalable data models using Snowflake and Erwin Data Modeler.
- Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow).
- Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models.
- Develop efficient SQL queries and stored procedures for data transformation, quality, and validation.
- Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models.
- Ensure performance tuning, security, and optimization of the Snowflake data warehouse.
- Document metadata, data lineage, and the business logic behind data structures and flows.
- Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance.

Must-Have Skills:
- Snowflake architecture, schema design, and data warehouse experience.
- DBT (Data Build Tool) for data transformation and pipeline development.
- Strong expertise in SQL (query optimization, complex joins, window functions, etc.; see the sketch after this posting).
- Hands-on experience with Erwin Data Modeler (logical and physical modeling).
- Experience with GCP (BigQuery, Cloud Composer, Cloud Storage).
- Experience reverse engineering legacy systems such as Sailfish or DDMS using DBeaver.

Good to Have:
- Experience with CI/CD tools and DevOps for data environments.
- Familiarity with data governance, security, and privacy practices.
- Exposure to Agile methodologies and working in distributed teams.
- Knowledge of Python for data engineering tasks and orchestration scripts.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and stakeholder management.
- Self-driven, with the ability to work independently in a remote setup.
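Window functions of the kind listed above are a standard tool for deduplicating records during validation; a minimal sketch using Python's built-in sqlite3 so it stays self-contained, where the customers table is a hypothetical stand-in and the ROW_NUMBER() pattern carries over to Snowflake or BigQuery unchanged:

```python
# Minimal sketch of window-function deduplication, runnable with the
# standard library's sqlite3 (SQLite 3.25+ for window functions); the
# customers table is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT, updated_at TEXT);
    INSERT INTO customers VALUES
        (1, 'a@example.com', '2024-01-01'),
        (2, 'a@example.com', '2024-03-01'),  -- newer duplicate
        (3, 'b@example.com', '2024-02-01');
""")

# Keep only the most recent row per email: rank rows within each email
# partition by recency, then select rank 1.
rows = conn.execute("""
    SELECT id, email, updated_at FROM (
        SELECT id, email, updated_at,
               ROW_NUMBER() OVER (
                   PARTITION BY email
                   ORDER BY updated_at DESC
               ) AS rn
        FROM customers
    ) AS ranked
    WHERE rn = 1
""").fetchall()

for row in rows:
    print(row)  # keeps id 2 for a@example.com and id 3 for b@example.com
```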

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Ahmedabad

Work from Office

Data Scientist

Responsibilities:

Data Exploration and Insights:
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution.
- Work with large datasets from various sources, including Excel files and databases.

Data Quality Improvement:
- Perform analyses specifically aimed at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.

Weekly Playback and Collaboration:
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.

Project Scaling and Support:
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets, including prerequisites for batch ingestion and post-batch-ingestion analysis and validation of SCI records.

Data Analysis and Validation:
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.

Stakeholder Collaboration:
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills:
- SQL
- Machine Learning
- Data Analysis
- Jupyter Notebook
- Data Cleansing
- Fuzzy Logic
- Python
- Data Quality Improvement
- Data Validation
- Data Acquisition
- Communication and Collaboration
- Problem-solving and Analytical skills

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Ahmedabad

Work from Office

Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
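For orientation, a minimal sketch of the RDF/SPARQL stack listed above, using the rdflib library; the namespace and class names are illustrative assumptions, not drawn from BFO or CCO.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Populate the in-memory graph with a couple of triples.
g.add((EX.acme, RDF.type, EX.Organization))
g.add((EX.acme, EX.hasName, Literal("Acme Corp")))

# SPARQL query over the graph.
results = g.query(
    """
    SELECT ?org ?name WHERE {
        ?org a ex:Organization ;
             ex:hasName ?name .
    }
    """,
    initNs={"ex": EX},
)
for org, name in results:
    print(org, name)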

Posted 2 weeks ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Cigna, a leading Health Services company, is looking for an exceptional Front-End API engineer in our Data Analytics Engineering organization. The Full Stack Engineer is responsible for delivering a business need end to end, from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and hungry to learn on the job and add value to the business. Critical attributes of a Full Stack Engineer, among others, are ownership, eagerness to learn, and an open mindset. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous-improvement mindset, drive the adoption of CI/CD tools, and support the improvement of the tool sets and processes.

Job Description Responsibilities :
- Articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily.
- Take ownership over work tasks, embrace interacting with all levels of the team, and raise challenges when necessary.
- Participate in design, definition, planning, development, and implementation of projects, and evaluate conformance to software development best practices.
- Perform peer code reviews; be responsible for the deliverable.
- Ask smart questions, take risks, and champion new ideas.
- Be business oriented and able to communicate at all levels.
- Ensure adherence to existing strategic direction and architectural strategies.
- Embrace the agile delivery process by releasing business value incrementally into production.
- Transfer key knowledge and code ownership to the team.
- Mentor talent and cultivate new team members.
- Foster an environment where the business is involved in the project and aware of key decisions, issues, and functionality.

Experience Required :
- 4+ years of React experience.
- 4+ years of JavaScript/TypeScript (Node.js) experience.
- 4+ years of experience with SQL.
- Strong React, TypeScript, and SQL skills are essential.
- 3+ years of experience with cloud technologies.
- 3+ years on Agile (Scrum) teams.

Experience Desired :
- Experience with version management tools; Git preferred.
- Experience with BDD and TDD development methodologies.
- Experience working in agile CI/CD environments; Jenkins experience preferred.
- Knowledge of and/or experience with health care information domains preferred.

Education and Training Required :
- Bachelor's degree (or equivalent) required.

Primary Skills : React, JavaScript, TypeScript, SQL, Node.js, GraphQL (a brief sketch follows below)

Additional Skills : AWS, Git, Terraform, Lambda
- Design and architect the solution independently.
- Take ownership and accountability.
- Write referenceable, modular code.
- Be fluent in particular areas, proficient in many areas, and passionate about learning.
- Have a quality mindset: not just code quality, but ongoing application/data quality, monitoring data to identify problems before they have business impact.
- Take risks and champion new ideas.
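As a small illustration of the GraphQL skill listed above, a minimal sketch of calling a GraphQL endpoint over HTTP from Python with requests; the URL and the member/claims schema are hypothetical placeholders.

import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # placeholder endpoint

QUERY = """
query MemberClaims($memberId: ID!) {
  member(id: $memberId) {
    name
    claims { id status amount }
  }
}
"""

def fetch_member_claims(member_id: str) -> dict:
    # GraphQL over HTTP is a POST with the query and variables as JSON.
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"memberId": member_id}},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    if "errors" in payload:
        raise RuntimeError(payload["errors"])
    return payload["data"]["member"]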

Posted 2 weeks ago

Apply

10.0 - 15.0 years

50 - 55 Lacs

Hyderabad

Work from Office

As a Director of Software Engineering at JPMorgan Chase within the Chief Data & Analytics Office's Data Governance Engineering team, you will play a pivotal role in supporting the firm in delivering services to clients and advancing the firm-wide agenda for Data & Analytics. You will lead firm-wide initiatives through a unified Data Analytics Platform, in alignment with the firm's Data & AI strategy. Collaborating with engineering teams, you will be responsible for creating designs, establishing best practices, and developing guidelines along with scalable frameworks to effectively manage large volumes of data, ensuring interoperability, compliance with data classification requirements, and maintaining data integrity and accessibility. You will work closely with Product & Engineering teams to promote unified engineering execution across multiple initiatives, strategically designing and building applications that address real-world use cases. Your expertise in software, applications, technical processes, and product management will be essential in promoting complex projects and initiatives, serving as a primary decision-maker and a champion of innovation and solution delivery. As part of the Product Delivery team, you will design and build scalable, cloud-native, foundational data governance products and services that support the Data Risk Pillars, providing a unified experience through the CDAO platform.

Job responsibilities :
- Collaborate with product and engineering teams to deliver robust firm-wide data governance solutions that drive enhanced customer experiences.
- Provide critical day-to-day leadership and strategic thinking, working with a team of engineers and architects to align cross-functional initiatives and ensure they are feasible both fiscally and technically.
- Make decisions that influence team resources, budget, tactical operations, and the execution and implementation of processes and procedures.
- Champion the firm's culture of diversity, equity, inclusion, and respect.
- Lead the consolidation and convergence of Data Governance capabilities under the unified CDAO Platform and other priority firm-wide initiatives related to BCBS 239, data lineage, controls, and data quality.
- Enable engineering teams to develop, enhance, and maintain established standards and best practices; drive self-service; and deliver on a strategy built on broad use of Amazon's utility computing web services (e.g., AWS EC2, AWS S3, AWS RDS, AWS CloudFront, CloudWatch, EKS).
- Identify opportunities to improve the resiliency, availability, security, and performance of platforms in the public cloud using JPMC best practices.
- Implement continuous process improvement, including but not limited to policy, procedures, and production monitoring, and reduce time to resolve.
- Identify, coordinate, and implement initiatives, projects, and activities that create efficiencies and optimize technical processing.
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve.
- Utilize programming languages like Java, Python, SQL, Node, Go, and Scala; graph and open-source RDBMS databases; container orchestration services including Kubernetes; and a variety of AWS tools and services.

Required qualifications, capabilities, and skills :
- Formal training or certification on software engineering concepts and 10+ years of applied experience.
- 5+ years of experience leading technologists to manage, anticipate, and solve complex technical items within your domain of expertise.
- Experience building or supporting environments on AWS using Terraform, including services like EKS, ELB, RDS, and S3.
- Strong understanding of business technology drivers and their impact on architecture design, performance, and monitoring best practices.
- Excellent communication skills, capable of adapting verbiage and style to the audience at hand and delivering critical information clearly and concisely; strong experience managing stakeholders at all levels.
- Strong analytical thinking with business acumen and the ability to assimilate information quickly, with a solution-based focus on incident and problem management.
- Hands-on experience with one or more cloud computing platform providers; experience architecting for private and public cloud environments and re-engineering and migrating on-premises data solutions to the cloud.
- Proficiency in building on emerging cloud serverless managed services to minimize or eliminate physical and virtual server footprints.
- Experience with high-volume, mission-critical applications and their interdependencies with other applications and databases.
- Proven work experience with container platforms such as Kubernetes; strong understanding of architecture, design, and business processes.
- Keen understanding of financial and budget management, control, and optimization of public cloud expenses.
- Experience working in large, collaborative teams to achieve organizational goals; passion for building an innovative culture.

Preferred qualifications, capabilities, and skills :
- Bachelor's/Master's degree in Computer Science or another technical or scientific discipline.
- Experience implementing multi-cloud architectures and a deep understanding of cloud infrastructure design, architecture, and cloud migration strategies.
- Demonstrated proficiency in technical solutions and in implementing firm-wide solutions; data governance vendor product knowledge is a plus.
- Certifications in target areas (AWS Cloud, Kubernetes, etc.).
- Experience leading Data Governance and Data Risk Reporting platforms is preferred.

If you are a software engineering leader ready to take the reins and drive impact, we've got an opportunity just for you.
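By way of illustration, a minimal sketch of the kind of AWS automation this role oversees, using boto3; the bucket, prefix, and metric names are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

def report_landing_zone_health(bucket: str = "my-governance-landing-zone",
                               prefix: str = "lineage/daily/") -> int:
    """Count files in an S3 data-drop prefix and publish the count as a
    custom CloudWatch metric that alarms can watch."""
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    file_count = resp.get("KeyCount", 0)
    cloudwatch.put_metric_data(
        Namespace="DataGovernance/Pipelines",
        MetricData=[{
            "MetricName": "LandingZoneFileCount",
            "Value": file_count,
            "Unit": "Count",
        }],
    )
    return file_count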

Posted 2 weeks ago

Apply

5.0 - 9.0 years

25 - 30 Lacs

Pune

Work from Office

Ethoca, a Mastercard company, is seeking a Senior Data Engineer to join our team in Pune, India to drive data enablement and explore big data solutions within our technology landscape. The role is visible and critical as part of a high-performing team, and it will appeal to you if you have an effective combination of domain knowledge, relevant experience, and the ability to execute on the details. You will bring cutting-edge software and full stack development skills with advanced knowledge of cloud and data lake experience while working with massive data volumes. You will own this: our teams are small, agile, and focused on the needs of the high-growth fintech marketplace. You will be working across functional teams within Ethoca and Mastercard to deliver on cloud strategy. We are committed to making our systems resilient and responsive, yet easily maintainable, on cloud.

Key Responsibilities :
- Design, develop, and optimize batch and real-time data pipelines using Snowflake, Snowpark, Python, and PySpark (a brief sketch follows below).
- Build data transformation workflows using dbt, with a strong focus on Test-Driven Development (TDD) and modular design.
- Implement and manage CI/CD pipelines using GitLab and Jenkins, enabling automated testing, deployment, and monitoring of data workflows.
- Deploy and manage Snowflake objects using Schema Change, ensuring controlled, auditable, and repeatable releases across environments.
- Administer and optimize the Snowflake platform, handling performance tuning, access management, cost control, and platform scalability.
- Drive DataOps practices by integrating testing, monitoring, versioning, and collaboration into every phase of the data pipeline lifecycle.
- Build scalable and reusable data models that support business analytics and dashboarding in Power BI.
- Develop and support real-time data streaming pipelines (e.g., using Kafka or Spark Structured Streaming) for near-instant data availability.
- Establish and implement data observability practices, including monitoring data quality, freshness, lineage, and anomaly detection across the platform.
- Plan and own deployments, migrations, and upgrades across data platforms and pipelines to minimize service impacts, including developing and executing mitigation plans.
- Collaborate with stakeholders to understand data requirements and deliver reliable, high-impact data solutions.
- Document pipeline architecture, processes, and standards, promoting consistency and transparency across the team.
- Apply exceptional problem-solving and analytical skills to troubleshoot complex data and system issues.
- Demonstrate excellent written and verbal communication skills when collaborating across technical and non-technical teams.

Required Qualifications :
- Tenured in the field of Computer Science/Engineering or Software Engineering.
- Bachelor's degree in Computer Science or a related technical field, including programming.
- Deep hands-on experience with Snowflake (including administration), Snowpark, and Python.
- Strong background in PySpark and distributed data processing.
- Proven track record using dbt for building robust, testable data transformation workflows following TDD.
- Familiarity with Schema Change for Snowflake object deployment and version control.
- Good to have: familiarity with Java (JDK 8 or greater) and exposure to the Spring and Spring Boot frameworks.
- Good to have: understanding and knowledge of Databricks.
- Proficiency in CI/CD tooling, especially GitLab and Jenkins, with a focus on automation and DataOps.
- Experience with real-time data processing and streaming pipelines.
- Strong grasp of cloud-based database infrastructure (AWS, Azure, or GCP).
- Skilled in developing insightful dashboards and scalable data models using Power BI.
- Expert in SQL development and performance optimization.
- Demonstrated success in building and maintaining data observability tools and frameworks.
- Proven ability to plan and execute deployments, upgrades, and migrations with minimal disruption to operations.
- Strong communication, collaboration, and analytical thinking across technical and non-technical stakeholders.

Ideally you have experience in banking, e-commerce, credit cards, or payment processing, and exposure to both SaaS and premises-based architectures. In addition, you have a post-secondary degree in computer science, mathematics, or a quantitative science.
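A minimal Snowpark sketch of the batch-transformation work described above; the connection parameters, table names, and business logic are hypothetical placeholders.

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<wh>", "database": "<db>", "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Daily confirmed-transaction totals per merchant (illustrative model).
daily_totals = (
    session.table("raw.transactions")
    .filter(col("status") == "CONFIRMED")
    .group_by("merchant_id", "txn_date")
    .agg(sum_("amount").alias("total_amount"))
)

# Materialize the result for downstream Power BI models.
daily_totals.write.save_as_table("analytics.daily_merchant_totals",
                                 mode="overwrite")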

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

To assist with and execute NDT data analysis, and to drive improvements in NDT scanning and analysis, towards securing ZERO defect escapement to customers and eliminating over-processing.

Competency Development & Resource Adequacy :
- Develop training materials and samples to facilitate classroom and practical trainings.
- Plan, execute, and monitor classroom and practical trainings for NDT Level 1 and 2 certifications in the plant.
- Make sure plants are equipped with the needed certified personnel to perform inspections.
- Maintain the skill matrix of quality staff in plants as per the LMWP competency model.
- Track the certification database and ensure only certified resources perform inspection in the plants.
- Improve the internal or an alternate certification process to make it leaner and more effective.
- Consistently focus on improving the competence level of NDT technicians, enabling a highly flexible workforce in the plants.

Measurement and Inspection Methods :
- Eliminate or reduce the variation between NDT reviewers through periodic training and assessments.
- Lead and/or support new development of NDT methods and tooling.
- Consistently improve or update NDT processes and methods.
- Focus on the development and implementation of Poka-Yoke solutions in inspection and measurement processes.
- Execute and implement new inspection methods, geometrical verification methods, new technologies, and new AC after the completion of development from Engineering.
- Drive improvements in existing NDT methods and processes to make them lean and effective.
- Lead tactical projects to meet the functional strategic plan.

Quality Compliance :
- Focus on a proactive approach to assuring process compliance before failure occurs.
- Plan and execute NDT process audits, NDT personnel reviews, and periodic data reviews as per the defined frequency.
- Monitor inspection effectiveness; support RCA and CAPA closure with stakeholders.
- Drive improvements in the audit process to make it lean and effective; follow up on the closure of audit findings.
- Bring improvements to BMS processes and tools related to process audits.
- Perform RCA for recurring defects in the process and improve the quality of products.

Operational Process and Support :
- Perform NDT data analysis accurately to secure zero defect escapement and zero over-processing.
- Provide on-time analysis support and feedback to the plants for smooth operations.
- Drive improvements in scanning methods and in obtaining better data quality.
- Support plant NDT resources towards eliminating rescanning (First Time Right scans).
- Govern and monitor the KPIs defined within the function/team and ensure the performance level is sustained.
- Keep operational quality records up to date.
- Set the baseline for all NDT activities and analysis, and drive improvements in tools and equipment that would make the NDT process more effective.
- Support technology projects, new product launches, and quality issue projects from a technical standpoint.
- Track, monitor, and improve the performance of gauge R&R in plants.
- Assure effective implementation of the calibration process in relevant inspection methods.
- Deliver training on, and implementation of, new AC.
- Monitor process performance and optimize the inspection frequency based on SPC analysis (see the sketch below).
- Provide on-time support to the manufacturing plants on daily operational challenges.

Required Qualifications :
- A bachelor's degree in engineering or equivalent, with a minimum of 3+ years of experience in the quality domain.
- Certified UT Level 2 in conventional and advanced Phased Array (PAUT) methods as per ASNT SNT-TC-1A; UT Level 3 certification will be an added strength.
- Certified in the IR inspection method.
- Minimum of 5 years' work experience in manufacturing, preferably in blade manufacturing with UT inspections.
- International experience and cultural awareness covering the Americas, Europe, India, and China.
- Knowledge of blade manufacturing is preferable, combined with explicit knowledge of quality tools, systems, and processes: audits, Six Sigma, PFMEA, control plans, PPAP.
- Strong English language skills (verbal and written).
- Preferably an ISO 9001 Lead Auditor certification and relevant audit experience.
- Flexible to travel across LM business units for executing training and operational support.
- Flexible to work in shift patterns to support manufacturing plants across the globe.

Desired Characteristics :
- A quality mindset, independent of plant-level responsibility and reporting.
- Self-motivation, and the ability to encourage others to take responsibility.
- Communication: effectively communicates beyond own area at all levels; initiates or improves the way to communicate, facilitate, and negotiate, resulting in increased impact and commitment.
- Targets important areas for innovation and evaluates multiple solutions beyond own areas.
- Decision making: sets goals and regularly follows up on them; takes decisions and monitors results.
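A minimal sketch of the SPC calculation referenced above: 3-sigma X-bar control limits, using a simplified pooled standard-deviation estimate (production charts typically use the A2/R-bar constants instead); the sample readings are illustrative.

from math import sqrt
from statistics import mean, stdev

def xbar_control_limits(subgroups):
    """Return (LCL, center, UCL) for an X-bar chart:
    center +/- 3 * sigma / sqrt(n), where n is the subgroup size."""
    n = len(subgroups[0])                       # subgroup size
    means = [mean(g) for g in subgroups]
    grand_mean = mean(means)
    # Simplified pooled estimate of process sigma over all readings.
    sigma_est = stdev([x for g in subgroups for x in g])
    margin = 3 * sigma_est / sqrt(n)
    return grand_mean - margin, grand_mean, grand_mean + margin

# Example: wall-thickness readings (mm) from four inspection subgroups.
readings = [[10.1, 9.9, 10.0], [10.2, 10.1, 9.8],
            [9.9, 10.0, 10.1], [10.0, 10.2, 9.9]]
lcl, center, ucl = xbar_control_limits(readings)
out_of_control = [i for i, g in enumerate(readings)
                  if not lcl <= mean(g) <= ucl]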

Posted 2 weeks ago

Apply

3.0 - 8.0 years

20 - 25 Lacs

Ahmedabad

Work from Office

As an HR Talent Acquisition Applicant Tracking Platform Owner at Infineon, you hold the key to unlocking the full potential of digital technologies in enhancing HR processes and elevating the candidate and employee experience. Join us on this journey and, together, let's align Infineon's people objectives with cutting-edge digital solutions: a customer-centric HR system landscape that redefines the future of HR.

In your new role you will :
- Be globally responsible for the design, implementation, and continuous improvement of our Talent Acquisition (TA) applicant tracking platform within our Global HR Platforms team. You will focus on managing platform demands and ensuring high HR data quality, GDPR compliance, and audit readiness. You will also collaborate closely with HR, IT, Labor Relations, and Business Continuity counterparts in global HR projects and beyond.
- Interface with other Talent Acquisition platform and module owners, ensuring alignment with relevant stakeholders and managing change requests to the TA applicant tracking platform.
- Coordinate demand management together with other Talent Acquisition platform and module owners, and prioritize demands with key stakeholders in alignment with IT counterpart(s).
- Set policies and guidelines for the platform to ensure that it operates smoothly and is General Data Protection Regulation (GDPR) compliant, e.g., manage and monitor data deletion and access concepts.
- Together with other Talent Acquisition platform and module owners, proactively drive decision-making on the direction and focus topics for the AI-driven platforms.
- Define and drive actions to improve TA data quality together with the HR Data Quality Owner.
- Drive automation and digitalization of related processes via the TA applicant tracking platform, in close collaboration with the Global Service Designer and IT.
- Support and consult in global HR projects related to our TA applicant tracking platform.
- Enable platform stakeholders on platform usage, changes, issues, and dependencies.
- Ensure that all platform releases are thoroughly tested and validated before deployment.

You are best equipped for this task if you have :
- Customer centricity and an effective HR system landscape at the heart of your thoughts and actions; you demonstrate excellent communication skills and know how to establish sustainable relations. You are willing to take responsibility while generating value with your ideas and solutions. Moreover, you enjoy working in interdisciplinary teams with multicultural backgrounds.
- A degree in Human Resources Management, Information Technology, Business Administration, or a related field.
- 3+ years of relevant working experience in a multinational environment in a similar role.
- Strong communication skills: you master conveying the benefits of technical adjustments to a non-technical audience and are able to translate business (HR) demands into technical requirements.
- Strong stakeholder and expectation management skills.
- Experience working in and managing HR (recruiting) systems, like the Umantis applicant tracking system, SuccessFactors, Eightfold, or similar.
- An innovative, customer-centric, and problem-solving mindset, combined with a hands-on spirit and great planning capabilities.
- Team spirit and knowledge of change management in larger, globally operating organizations.
- Excellent English skills.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

9 - 13 Lacs

Hyderabad

Work from Office

In SAP data and analytics at PwC, you will specialise in providing consulting services for data and analytics solutions using SAP technologies. You will analyse client requirements, design and implement data management and analytics solutions, and provide training and support for effective utilisation of SAP data and analytics tools. Working in this area, you will work closely with clients to understand their data needs, develop data models, perform data analysis, and create visualisations and reports to support data-driven decision-making, helping them optimise their data management processes, enhance data quality, and derive valuable insights from their data to achieve their strategic objectives.

- BW on HANA / BW/4HANA implementation in key SAP modules.
- Requirements analysis, conception, and implementation/development of solutions as per requirements.
- Create HLDs and then convert them to LLDs.
- Data extraction from SAP and non-SAP systems, data modelling, and report delivery.
- Work closely with project leaders, business teams, and SAP functional counterparts to architect, design, and develop SAP BW/4HANA solutions.
- Understand the integration and consumption of BW data models/reports with other tools (a brief sketch follows below).

Responsibilities :
- Hands-on experience in SAP BW/4HANA or SAP BW on HANA, with a strong understanding of the usage of objects like composite providers, Open ODS views, advanced DSOs, and transformations; exposing BW models as HANA views; mixed scenarios; and performance optimization concepts such as data tiering optimization.
- Experience in integrating BW with various SAP and non-SAP backend systems/sources of data, and good knowledge of the different data acquisition techniques in BW/4HANA.
- Knowledge of available SAP BW/4HANA tools and their usage, such as the BW/4HANA Web Cockpit.

Mandatory Skill Sets :
- Full life cycle implementation experience in SAP BW/4HANA or SAP BW on HANA.
- Hands-on experience in data extraction using standard or generic data sources.
- Good knowledge of data source enhancement.
- Strong experience in writing ABAP/AMDP code for exits and transformations.
- Strong understanding of CKF, RKF, formulas, selections, variables, and other components used for reporting.

Preferred Skill Sets :
- Understanding of the LSA/LSA++ architecture and its development standards.
- Good understanding of BW4 application and database security concepts.
- Functional knowledge of various modules like SD, MM, and FI.

Education Qualification :
- B.Tech / M.Tech (Computer Science, Mathematics & Scientific Computing, etc.)

Degrees/Field of Study required : Bachelor of Technology, Master of Engineering
Degrees/Field of Study preferred :

Required Skills : SAP Business Warehouse, Microsoft Azure
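For orientation, a minimal sketch of reading a BW-exposed HANA view from Python with SAP's hdbcli driver; the host, credentials, and view name are hypothetical placeholders, not artifacts from any actual BW system.

from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com", port=30015,
    user="<user>", password="<password>",
)
try:
    cur = conn.cursor()
    # A BW/4HANA model exposed as a HANA view is queryable like any
    # other SQL object; the view path below is illustrative.
    cur.execute(
        'SELECT "CALMONTH", SUM("SALES_AMOUNT") '
        'FROM "_SYS_BIC"."bw.models/SALES_CUBE" '
        'GROUP BY "CALMONTH" ORDER BY "CALMONTH"'
    )
    for calmonth, total in cur.fetchall():
        print(calmonth, total)
finally:
    conn.close()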

Posted 2 weeks ago

Apply