5.0 years
0 Lacs
Delhi
On-site
The Role Context: This is an exciting opportunity to join a dynamic and growing organization working at the forefront of technology trends and developments in the social impact sector. Wadhwani Center for Government Digital Transformation (WGDT) works with government ministries and state departments in India with a mission of “Enabling digital transformation to enhance the impact of government policy, initiatives and programs”. We are seeking a highly motivated and detail-oriented individual to join our team as a Data Engineer with experience in designing, constructing, and maintaining the architecture and infrastructure necessary for data generation, storage, and processing, and to contribute to the successful implementation of digital government policies and programs. You will play a key role in developing robust, scalable, and efficient systems to manage large volumes of data, making it accessible for analysis and decision-making, and driving innovation and optimizing operations across various government ministries and state departments in India. Key Responsibilities: a. Data Architecture Design: Design, develop, and maintain scalable data pipelines and infrastructure for ingesting, processing, storing, and analyzing large volumes of data efficiently. This involves understanding business requirements and translating them into technical solutions. b. Data Integration: Integrate data from various sources such as databases, APIs, streaming platforms, and third-party systems. Ensure the data is collected reliably and efficiently, maintaining data quality and integrity throughout the process as per the Ministries'/government data standards. c. Data Modeling: Design and implement data models to organize and structure data for efficient storage and retrieval, using techniques such as dimensional modeling, normalization, and denormalization depending on the specific requirements of the project. d. Data Pipeline Development/ETL (Extract, Transform, Load): Develop data pipeline/ETL processes to extract data from source systems, transform it into the desired format, and load it into the target data systems. This involves writing scripts, using ETL tools, or building data pipelines to automate the process and ensure data accuracy and consistency. e. Data Quality and Governance: Implement data quality checks and data governance policies to ensure data accuracy, consistency, and compliance with regulations. Design and track data lineage, and support data stewardship, metadata management, building a business glossary, etc. f. Data Lakes and Warehousing: Design and maintain data lakes and data warehouses to store and manage structured data from relational databases, semi-structured data like JSON or XML, and unstructured data such as text documents, images, and videos at any scale. Integrate with big data processing frameworks such as Apache Hadoop, Apache Spark, and Apache Flink, as well as with machine learning and data visualization tools. g. Data Security: Implement security practices, technologies, and policies designed to protect data from unauthorized access, alteration, or destruction throughout its lifecycle. This includes data access controls, encryption, data masking and anonymization, data loss prevention, and compliance with regulatory requirements such as DPDP, GDPR, etc. h. Database Management: Administer and optimize databases, both relational and NoSQL, to manage large volumes of data effectively.
i. Data Migration: Plan and execute data migration projects to transfer data between systems while ensuring data consistency and minimal downtime. j. Performance Optimization: Optimize data pipelines and queries for performance and scalability. Identify and resolve bottlenecks, tune database configurations, and implement caching and indexing strategies to improve data processing speed and efficiency. k. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with access to the necessary data resources. Also work closely with IT operations teams to deploy and maintain data infrastructure in production environments. l. Documentation and Reporting: Document work, including data models, data pipelines/ETL processes, and system configurations. Create documentation and provide training to other team members to ensure the sustainability and maintainability of data systems. m. Continuous Learning: Stay updated with the latest technologies and trends in data engineering and related fields. Participate in training programs, attend conferences, and engage with the data engineering community to enhance skills and knowledge. Desired Skills/Competencies Education: A Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or equivalent, with at least 5 years of experience. Database Management: Strong expertise in working with databases, such as SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra). Big Data Technologies: Familiarity with big data technologies, such as Apache Hadoop, Spark, and related ecosystem components, for processing and analyzing large-scale datasets. ETL Tools: Experience with ETL tools (e.g., Apache NiFi, Apache Airflow, Talend Open Studio, Pentaho, InfoSphere) for designing and orchestrating data workflows. Data Modeling and Warehousing: Knowledge of data modeling techniques and experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake). Data Governance and Security: Understanding of data governance principles and best practices for ensuring data quality and security. Cloud Computing: Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services for scalable and cost-effective data storage and processing. Streaming Data Processing: Familiarity with real-time data processing frameworks (e.g., Apache Kafka, Apache Flink) for handling streaming data. KPIs: Data Pipeline Efficiency: Measure the efficiency of data pipelines in terms of data processing time, throughput, and resource utilization. KPIs could include average time to process data, data ingestion rates, and pipeline latency. Data Quality Metrics: Track data quality metrics such as completeness, accuracy, consistency, and timeliness of data. KPIs could include data error rates, missing values, data duplication rates, and data validation failures. System Uptime and Availability: Monitor the uptime and availability of data infrastructure, including databases, data warehouses, and data processing systems. KPIs could include system uptime percentage, mean time between failures (MTBF), and mean time to repair (MTTR). Data Storage Efficiency: Measure the efficiency of data storage systems in terms of storage utilization, data compression rates, and data retention policies. KPIs could include storage utilization rates, data compression ratios, and data storage costs per unit.
Data Security and Compliance: Track adherence to data security policies and regulatory compliance requirements such as DPDP, GDPR, HIPAA, or PCI DSS. KPIs could include security incident rates, data access permissions, and compliance audit findings. Data Processing Performance: Monitor the performance of data processing tasks such as ETL (Extract, Transform, Load) processes, data transformations, and data aggregations. KPIs could include data processing time, CPU usage, and memory consumption. Scalability and Performance Tuning: Measure the scalability and performance of data systems under varying workloads and data volumes. KPIs could include scalability benchmarks, system response times under load, and performance improvements achieved through tuning. Resource Utilization and Cost Optimization: Track resource utilization and costs associated with data infrastructure, including compute resources, storage, and network bandwidth. KPIs could include cost per data unit processed, cost per query, and cost savings achieved through optimization. Incident Response and Resolution: Monitor the response time and resolution time for data-related incidents and issues. KPIs could include incident response time, time to diagnose and resolve issues, and customer satisfaction ratings for support services. Documentation and Knowledge Sharing: Measure the quality and completeness of documentation for data infrastructure, data pipelines, and data processes. KPIs could include documentation coverage, documentation update frequency, and knowledge sharing activities such as internal training sessions or knowledge base contributions. Years of experience of the current role holder: New Position. Ideal years of experience: 3–5 years. Career progression for this role: CTO, WGDT (Head of Incubation Centre). Our Culture: WF is a global not-for-profit and works like a start-up, in a fast-moving, dynamic pace where change is the only constant and flexibility is the key to success. Three mantras that we practice across job roles, levels, functions, programs and initiatives are Quality, Speed, and Scale, in that order. We are an ambitious and inclusive organization, where everyone is encouraged to contribute and ideate. We are intensely and insanely focused on driving excellence in everything we do. We want individuals with the drive for excellence, and passion to do whatever it takes to deliver world class outcomes to our beneficiaries. We set our own standards, often more rigorous than what our beneficiaries demand, and we want individuals who love it this way. We have a creative and highly energetic environment – one in which we look to each other to innovate new solutions not only for our beneficiaries but for ourselves too. Individuals who are open to collaborating with a borderless mentality, often going beyond the hierarchy and siloed definitions of functional KRAs, will thrive in our environment. This is a workplace where expertise is shared with colleagues around the globe. Individuals uncomfortable with change, constant innovation, and short learning cycles, and those looking for stability and orderly working days, may not find WF to be the right place for them. Finally, we want individuals who want to do greater good for the society leveraging their area of expertise, skills and experience.
The foundation is an equal opportunity firm with no bias towards gender, race, colour, ethnicity, country, language, age and any other dimension that comes in the way of progress. Join us and be a part of us! Bachelor's in Technology / Master's in Technology
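The ETL and data-quality responsibilities listed in the posting above come down to a repeatable extract-transform-validate-load cycle. The following is a minimal, hedged sketch of that cycle in Python; the source records, table name, and validation rules are hypothetical, and sqlite3 stands in for whichever warehouse a ministry programme actually uses.

```python
import sqlite3

import pandas as pd


def extract() -> pd.DataFrame:
    # Stand-in for a real source system (database, API, or flat file).
    return pd.DataFrame({
        "beneficiary_id": [101, 102, 102, None],
        "district": ["Pune", "Delhi", "Delhi", "Patna"],
        "amount": [1200.0, 800.0, 800.0, -50.0],
        "reported_on": ["2024-04-01", "2024-04-01", "2024-04-01", "2024-04-02"],
    })


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Standardise types before loading.
    df = df.copy()
    df["reported_on"] = pd.to_datetime(df["reported_on"], errors="coerce")
    return df


def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Data-quality checks: completeness, duplication, and domain rules.
    report = {
        "missing_ids": int(df["beneficiary_id"].isna().sum()),
        "duplicates": int(df.duplicated(subset=["beneficiary_id", "reported_on"]).sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }
    print("data-quality summary:", report)
    clean = df[df["beneficiary_id"].notna() & (df["amount"] >= 0)]
    return clean.drop_duplicates(subset=["beneficiary_id", "reported_on"])


def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    # Append the clean batch to a warehouse table (sqlite3 as a stand-in).
    with sqlite3.connect(db_path) as conn:
        df.to_sql("fact_disbursement", conn, if_exists="append", index=False)


if __name__ == "__main__":
    load(validate(transform(extract())))
```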
Posted 23 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities: Develop, optimize, and maintain complex SQL queries, stored procedures, functions, and views. Analyze slow-performing queries and optimize execution plans to improve database performance. Design and implement indexing strategies to enhance query efficiency. Work with developers to optimize database interactions in applications. Develop and implement Teradata best practices for large-scale data processing and ETL workflows. Monitor and troubleshoot Teradata performance issues using tools like DBQL (Database Query Log), Viewpoint, and Explain Plan Analysis. Perform data modeling, normalization, and schema design improvements. Collaborate with teams to implement best practices for database tuning and performance enhancement. Automate repetitive database tasks using scripts and scheduled jobs. Document database architecture, queries, and optimization techniques. Required Skills & Qualifications: Strong proficiency in Teradata SQL, including query optimization techniques. Strong proficiency in SQL (T-SQL, PL/SQL, or equivalent). Experience with indexing strategies, partitioning, and caching techniques. Knowledge of database normalization, denormalization, and best practices. Familiarity with ETL processes, data warehousing, and large datasets. Experience in writing and optimizing stored procedures, triggers, and functions. Hands-on experience in Teradata performance tuning, indexing, partitioning, and statistics collection. Experience with EXPLAIN plans, DBQL analysis, and Teradata Viewpoint monitoring. Candidate should have Power BI / Tableau integration experience (good to have). About Us: Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group. Equal Opportunity Employer: Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. Information Security Responsibilities: Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems. Take part in information security training and act accordingly while handling information. Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO). Understand and adhere to the additional information security responsibilities as part of the assigned job role.
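The query-optimization work described in this posting follows a common loop: read the execution plan, add or adjust an index, and re-measure. The sketch below shows that loop with sqlite3 as a local stand-in (on Teradata the equivalents are EXPLAIN, DBQL, and COLLECT STATISTICS); the table and query are hypothetical.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A hypothetical transactions table with enough rows to make a full scan visible.
cur.execute("CREATE TABLE txn (txn_id INTEGER PRIMARY KEY, account_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO txn (account_id, amount) VALUES (?, ?)",
                [(i % 5000, i * 0.01) for i in range(200_000)])
conn.commit()

QUERY = "SELECT COUNT(*), SUM(amount) FROM txn WHERE account_id = 4242"


def profile(label: str) -> None:
    # Read the execution plan, then time the query.
    plan = cur.execute("EXPLAIN QUERY PLAN " + QUERY).fetchall()
    start = time.perf_counter()
    cur.execute(QUERY).fetchone()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.2f} ms, plan={plan}")


profile("before index")   # plan shows a full table scan
cur.execute("CREATE INDEX idx_txn_account ON txn(account_id)")
profile("after index")    # plan shows an index search on idx_txn_account
```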
Posted 1 day ago
5.0 - 31.0 years
9 - 15 Lacs
Bengaluru/Bangalore
On-site
Job Title: NoSQL Database Administrator (DBA )Department: IT / Data Management Job Purpose: The NoSQL Database Administrator will be responsible for designing, deploying, securing, and optimizing NoSQL databases to ensure high availability, reliability, and scalability of mission-critical applications. The role involves close collaboration with developers, architects, and security teams, especially in compliance-driven environments such as UIDAI. Key Responsibilities: Collaborate with developers and solution architects to design and implement efficient and scalable NoSQL database schemas. Ensure database normalization, denormalization where appropriate, and implement indexing strategies to optimize performance. Evaluate and deploy replication architectures to support high availability and fault tolerance. Monitor and analyze database performance using tools like NoSQL Enterprise Monitor and custom monitoring scripts. Troubleshoot performance bottlenecks and optimize queries using query analysis, index tuning, and rewriting techniques. Fine-tune NoSQL server parameters, buffer pools, caches, and system configurations to improve throughput and minimize latency. Implement and manage Role-Based Access Control (RBAC), authentication, authorization, and auditing to maintain data integrity, confidentiality, and compliance. Act as a liaison with UIDAI-appointed GRCP and security audit agencies, ensuring all security audits are conducted timely, and provide the necessary documentation and artifacts to address risks and non-conformities. Participate in disaster recovery planning, backup management, and failover testing. Key Skills & Qualifications: Educational Qualifications: Bachelor’s or Master’s Degree in Computer Science, Information Technology, or a related field. Technical Skills: Proficiency in NoSQL databases such as MongoDB, Cassandra, Couchbase, DynamoDB, or similar. Strong knowledge of database schema design, data modeling, and performance optimization. Experience in setting up replication, sharding, clustering, and backup strategies. Familiarity with performance monitoring tools and writing custom scripts for health checks. Hands-on experience with database security, RBAC, encryption, and auditing mechanisms. Strong troubleshooting skills related to query optimization and server configurations. Compliance & Security: Experience with data privacy regulations and security standards, particularly in compliance-driven sectors like UIDAI. Ability to coordinate with government and regulatory security audit teams. Behavioral Skills: Excellent communication and stakeholder management. Strong analytical, problem-solving, and documentation skills. Proactive and detail-oriented with a focus on system reliability and security. Key Interfaces: Internal: Developers, Solution Architects, DevOps, Security Teams, Project Managers. External: UIDAI-appointed GRCP, third-party auditors, security audit agencies. Key Challenges: Maintaining optimal performance and uptime in a high-demand, compliance-driven environment. Ensuring security, scalability, and availability of large-scale NoSQL deployments. Keeping up with evolving data security standards and audit requirements.
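As an illustration of the indexing and Role-Based Access Control duties described in this posting, here is a hedged pymongo sketch. It assumes a locally reachable MongoDB instance and a connecting user with role-administration privileges; the database, collection, role, and user names are hypothetical.

```python
from pymongo import ASCENDING, MongoClient

# Assumes a locally reachable MongoDB; in a real compliance-driven deployment the
# URI, credentials, and TLS settings would come from secured configuration.
client = MongoClient("mongodb://localhost:27017/")
db = client["resident_services"]          # hypothetical database name

# Indexing strategy: a compound index supporting the most common lookup pattern.
db.enrolments.create_index(
    [("state_code", ASCENDING), ("enrolment_date", ASCENDING)],
    name="ix_state_date",
)

# Role-Based Access Control: a read-only role scoped to one collection, granted
# to an audit user (requires a connecting user with role-administration rights).
db.command(
    "createRole", "enrolment_reader",
    privileges=[{"resource": {"db": "resident_services", "collection": "enrolments"},
                 "actions": ["find"]}],
    roles=[],
)
db.command(
    "createUser", "audit_viewer",
    pwd="change-me",
    roles=[{"role": "enrolment_reader", "db": "resident_services"}],
)

print(db.enrolments.index_information())
```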
Posted 2 days ago
9.0 - 14.0 years
0 - 0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities Design and develop conceptual, logical, and physical data models for enterprise and application-level databases. Translate business requirements into well-structured data models that support analytics, reporting, and operational systems. Define and maintain data standards, naming conventions, and metadata for consistency across systems. Collaborate with data architects, engineers, and analysts to implement models into databases and data warehouses/lakes. Analyze existing data systems and provide recommendations for optimization, refactoring, and improvements. Create entity relationship diagrams (ERDs) and data flow diagrams to document data structures and relationships. Support data governance initiatives including data lineage, quality, and cataloging. Review and validate data models with business and technical stakeholders. Provide guidance on normalization, denormalization, and performance tuning of database designs. Ensure models comply with organizational data policies, security, and regulatory requirements. Looking for a Data Modeler Architect to design conceptual, logical, and physical data models. Must translate business needs into scalable models for analytics and operational systems. Strong in normalization , denormalization , ERDs , and data governance practices. Experience in star/snowflake schemas and medallion architecture preferred. Role requires close collaboration with architects, engineers, and analysts. Data modelling, Normalization, Denormalization, Star and snowflake schemas, Medallion architecture, ERDLogical data model, Physical data model & Conceptual data model
Posted 4 days ago
5.0 - 10.0 years
9 - 14 Lacs
Vijayawada, Hyderabad
Work from Office
We are actively seeking experienced Power BI Administrators who can take full ownership of Power BI environments from data modeling and report development to security management and system integration. This role is ideal for professionals with a solid technical foundation and hands-on expertise across the Power BI ecosystem, including enterprise BI environments. Key Responsibilities: Data Model Management: Maintain and optimize Power BI data models to meet evolving analytical and reporting needs. Data Import & Transformation: Import and transform data from various sources using Power Query (M) and implement business logic using DAX. Advanced Measure Creation: Design complex DAX measures, KPIs, and calculated columns tailored to dynamic reporting requirements. Access & Permission Management: Administer and manage user access, roles, and workspace security settings in Power BI Service. Interactive Reporting: Develop insightful and interactive dashboards and reports aligned with business goals and user needs. Error Handling & Data Validation: Identify, investigate, and resolve data inconsistencies and refresh issues, ensuring data accuracy and report reliability. BCS & BIRT Integration: Develop and manage data extraction reports using Business Connectivity Services (BCS) and BIRT (Business Intelligence and Reporting Tool). Preferred Skills: Proven experience as a Power BI Administrator or similar BI role. Strong expertise in Power BI Desktop, Power BI Service, Power Query, DAX, and security configuration. Familiar with report lifecycle management, data governance, and large-scale enterprise BI environments. Experience with BCS and BIRT tools is highly preferred. Capable of independently troubleshooting data and configuration issues. Excellent communication and documentation skills.
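Routine refresh monitoring of the kind this posting describes is often scripted against the public Power BI REST API. The sketch below is illustrative only: it assumes an Azure AD access token with the appropriate dataset permissions is already available, uses placeholder workspace and dataset IDs, and follows the documented "Refresh Dataset In Group" and "Get Refresh History In Group" endpoints.

```python
import time

import requests

# Placeholders: supply a real Azure AD access token, workspace (group) id and dataset id.
TOKEN = "<aad-access-token>"
GROUP_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"
BASE = f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Trigger a refresh ("Refresh Dataset In Group").
resp = requests.post(f"{BASE}/refreshes", headers=HEADERS,
                     json={"notifyOption": "NoNotification"})
resp.raise_for_status()

# Poll the refresh history ("Get Refresh History In Group") until the run finishes;
# a status of "Unknown" means the refresh is still in progress.
while True:
    history = requests.get(f"{BASE}/refreshes?$top=1", headers=HEADERS).json()
    status = history["value"][0]["status"]
    print("latest refresh status:", status)
    if status in ("Completed", "Failed", "Disabled"):
        break
    time.sleep(30)
```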
Posted 1 week ago
9.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description Having 9+ years of working experience in Data Engineering and Data Analytics projects implementing Data Warehouse, Data Lake and Lakehouse solutions and the associated ETL/ELT patterns. Worked as a Data Modeller in one or two implementations, creating and implementing data models and database designs using Dimensional and ER models. Good knowledge and experience in modelling complex scenarios like many-to-many relationships, SCD types, late-arriving facts and dimensions, etc. Hands-on experience in at least one of the data modelling tools like Erwin, ER/Studio, Enterprise Architect or SQLDBM. Experience in working closely with business stakeholders/business analysts to understand the functional requirements and translate them into data models and database designs. Experience in creating conceptual and logical models and translating them into physical models to address both functional and non-functional requirements. Strong knowledge of SQL, able to write complex queries and profile the data to understand relationships and DQ issues. Very strong understanding of database modelling and design principles like normalization, denormalization, and isolation levels. Experience in performance optimization through database design (physical modelling). Good communication skills (ref:hirist.tech)
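Of the modelling scenarios this posting mentions (SCD types in particular), Type 2 is the most common to prototype. Below is a minimal pandas sketch of a Type 2 merge; the column names are hypothetical and an artificial high date stands in for "open-ended" validity.

```python
import pandas as pd

HIGH_DATE = pd.Timestamp("9999-12-31")   # artificial "open-ended" validity date

# Current dimension rows: one open row per business key.
dim = pd.DataFrame({
    "customer_id": ["C1", "C2"],
    "city": ["Kolkata", "Pune"],
    "valid_from": pd.to_datetime(["2023-01-01", "2023-01-01"]),
    "valid_to": [HIGH_DATE, HIGH_DATE],
    "is_current": [True, True],
})

# Incoming snapshot from the source system.
incoming = pd.DataFrame({"customer_id": ["C1", "C3"], "city": ["Howrah", "Delhi"]})
load_date = pd.Timestamp("2024-04-01")

merged = incoming.merge(dim[dim["is_current"]], on="customer_id",
                        how="left", suffixes=("", "_dim"))
changed = merged[merged["city_dim"].notna() & (merged["city"] != merged["city_dim"])]
brand_new = merged[merged["city_dim"].isna()]

# Expire the old versions of changed keys...
expire = dim["customer_id"].isin(changed["customer_id"]) & dim["is_current"]
dim.loc[expire, "valid_to"] = load_date
dim.loc[expire, "is_current"] = False

# ...and append new current rows for changed and brand-new keys.
new_rows = pd.concat([changed, brand_new])[["customer_id", "city"]].assign(
    valid_from=load_date, valid_to=HIGH_DATE, is_current=True)
dim = pd.concat([dim, new_rows], ignore_index=True)

print(dim.sort_values(["customer_id", "valid_from"]))
```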
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Summary: You will be creating, maintaining, and supporting data pipelines with information coming from our vessels, projects, campaigns, third-party data services, and so on. You will play a key role in organizing data and developing & maintaining data models and designing modern data solutions and products on our cloud data platform. You will work closely with the business to define & fine-tune requirements. You will support our data scientists and report developers across the organization and enable them to find the required data and information. Your responsibilities: You have a result-driven and hands-on mindset and prefer to work in an agile environment. You are a team player and good communicator. You have experience with SQL or other data-oriented development languages (Python, Scala, Spark, etc.). You have proven experience in developing data models and database structures. You have proven experience with UML modelling and ER modelling for documenting and designing data structures. You have proven experience in the development of data pipelines and orchestrations. You have a master's or bachelor's degree in the field of engineering or computer science. You like to iterate quickly and try out new things. Ideally, you have experience with a wide variety of data tools and data types like geospatial, time series, structured & unstructured, etc. Your profile: Experience on the Microsoft Azure data stack (Synapse/Data Factory, Power BI, Databricks, Data Lake, Microsoft SQL, Microsoft AAS) is mandatory. Experience with machine learning and AI is a plus. Knowledge of fundamental data modeling concepts such as entities, relationships, normalization, and denormalization. Knowledge of different data modeling techniques (e.g., ER diagrams, star schema, snowflake schema). Experience with reporting tools is a plus (Grafana, Power BI, Tableau). Having a healthy appetite and open mind for new technologies is a plus. Holds a bachelor's or master's degree in computer science, information technology, or a related field. Relevant experience level of 6-10 years is mandatory. Job location is Chennai. Our offer: An extensive mobility program for a healthy work-life balance. A permanent training track which allows you to develop yourself personally and professionally. A stimulating, innovative workplace with numerous growth opportunities. A people-oriented environment with an interactive health program and a focus on employee wellbeing.
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Be able to align data models with business goals and enterprise architecture. Collaborate with Data Architects, Engineers, Business Analysts, and Leadership teams. Lead data modelling and governance discussions and decision-making across cross-functional teams. Proactively identify data inconsistencies, integrity issues, and optimization opportunities. Design scalable and future-proof data models. Define and enforce enterprise data modelling standards and best practices. Experience working in Agile environments (Scrum, Kanban). Identify impacted applications, size capabilities, and create new capabilities. Lead complex initiatives with multiple cross-application impacts, ensuring seamless integration. Drive innovation, optimize processes, and deliver high-quality architecture solutions. Understand business objectives, review business scenarios, and plan acceptance criteria for proposed solution architecture. Discuss capabilities with individual applications, resolve dependencies and conflicts, and reach agreements on proposed high-level approaches and solutions. Participate in Architecture Review, present solutions, and review other solutions. Work with Enterprise Architects to learn and adopt standards and best practices. Design solutions adhering to applicable rules and compliances. Stay updated with the latest technology trends to solve business problems with minimal change or impact. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Undergraduate degree or equivalent experience. 8+ years of proven experience in a similar role, leading and mentoring a team of architects and technical leads. Extensive experience with Relational, Dimensional, and NoSQL Data Modelling. Experience in driving innovation, optimizing processes, and delivering high-quality solutions. Experience in large scale OLAP, OLTP, and hybrid data processing systems. Experience in complex initiatives with multiple cross-application impacts. Expert in Erwin for Conceptual, Logical, and Physical Data Modelling. Expertise in Relational Databases, SQL, indexing and partitioning for databases like Teradata, Snowflake, Azure Synapse or traditional RDBMS. Expertise in ETL/ELT architecture, data pipelines, and integration strategies. Expertise in Data Normalization, Denormalization and Performance Optimization. Exposure to cloud platforms, tools, and AI-based solutions. Solid knowledge of 3NF, Star Schema, Snowflake schema, and Data Vault. Knowledge of Java, Python, Spring, Spring Boot framework, SQL, MongoDB, Kafka, React JS, Dynatrace, Power BI, or similar exposure. Knowledge of Azure Platform as a Service (PaaS) offerings (Azure Functions, App Service, Event Grid). Good knowledge of the latest happenings in the technology world. Advanced SQL skills for complex queries, stored procedures, indexing, partitioning, macros, recursive queries, query tuning and OLAP functions. Understanding of Data Privacy Regulations, Master Data Management, and Data Quality. Proven excellent communication and leadership skills. Proven ability to think from a long-term perspective and arrive at intentional and strategic architecture. Proven ability to provide consistent solutions across Lines of Business (LOB). At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
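The advanced SQL expectations above (window/OLAP functions, recursive queries) can be illustrated with a small, self-contained example. sqlite3 is used here only because it ships with Python and supports both features on reasonably recent builds (SQLite 3.25+); the tables and data are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claim (claim_id INTEGER, member_id TEXT, claim_date TEXT, amount REAL);
INSERT INTO claim VALUES
 (1, 'M1', '2024-01-05', 120.0), (2, 'M1', '2024-02-10', 80.0),
 (3, 'M2', '2024-01-20', 300.0), (4, 'M2', '2024-03-02', 50.0);
CREATE TABLE org (unit_id INTEGER, parent_id INTEGER, name TEXT);
INSERT INTO org VALUES (1, NULL, 'Enterprise'), (2, 1, 'Data & Analytics'), (3, 2, 'Data Platform');
""")

# Window (OLAP) function: running claim spend per member.
for row in conn.execute("""
    SELECT member_id, claim_date, amount,
           SUM(amount) OVER (PARTITION BY member_id ORDER BY claim_date) AS running_spend
    FROM claim ORDER BY member_id, claim_date"""):
    print(row)

# Recursive CTE: walk an organisational hierarchy top-down.
for row in conn.execute("""
    WITH RECURSIVE tree(unit_id, name, depth) AS (
        SELECT unit_id, name, 0 FROM org WHERE parent_id IS NULL
        UNION ALL
        SELECT o.unit_id, o.name, t.depth + 1
        FROM org o JOIN tree t ON o.parent_id = t.unit_id)
    SELECT * FROM tree"""):
    print(row)
```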
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer Location: Bangalore About US FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world. The Opportunity “As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform to include fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of Data Warehousing and Data Lake solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a dedicated team, leveraging your skills in the data engineering area to build solutions and drive innovation forward. ”. What You’ll Contribute Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on Data Platform technologies. Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar. Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse. Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake. Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases. Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data. Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency. Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed. Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools. Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability. Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices. Design scalable architectures that support FICO’s analytics and decisioning solutions Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs. What We’re Seeking 7+ years of hands-on experience as a Data Engineer working on production-grade systems. Proficiency in programming languages such as Python or Scala for data processing. 
Strong SQL skills, including complex joins, window functions, and query optimization techniques. Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake). Familiarity with data orchestration tools like Airflow, Dagster, or Prefect. Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake. Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus. Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization). Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools. Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues. Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams. Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed. Bachelor’s in computer science or related field. Exposure to MLOps pipelines (MLflow, Kubeflow, Feature Stores) is a plus but not mandatory. Engineers with certifications will be preferred. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
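For the orchestration tools named in this posting, a daily batch pipeline is typically declared as a DAG. The sketch below assumes Apache Airflow 2.4+ is installed and uses placeholder task logic; it is illustrative, not a FICO implementation.

```python
# A minimal daily batch pipeline skeleton, assuming Apache Airflow 2.4+ is installed;
# the task bodies are placeholders for real ingest/transform/load logic.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest(**context):
    print("ingesting partition", context["ds"])       # e.g. pull the day's files from S3


def transform(**context):
    print("transforming partition", context["ds"])    # e.g. clean/conform with Spark or pandas


def load(**context):
    print("loading partition", context["ds"])         # e.g. publish to Snowflake/Redshift


with DAG(
    dag_id="daily_curated_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_ingest >> t_transform >> t_load
```

Retries, catchup behaviour, and dependency ordering are the parts worth settling early; the placeholder operators are usually swapped for provider-specific ones (Spark, Glue, Snowflake) later.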
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Engineer About US FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world. The Opportunity “As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform to include fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of Data Warehousing and Data Lake solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a dedicated team, leveraging your skills in the data engineering area to build solutions and drive innovation forward. ”. What You’ll Contribute Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on Data Platform technologies. Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar. Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse. Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake. Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases. Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data. Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency. Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed. Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools. Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability. Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices. Design scalable architectures that support FICO’s analytics and decisioning solutions Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs. What We’re Seeking 7+ years of hands-on experience as a Data Engineer working on production-grade systems. Proficiency in programming languages such as Python or Scala for data processing. Strong SQL skills, including complex joins, window functions, and query optimization techniques. 
Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake). Familiarity with data orchestration tools like Airflow, Dagster, or Prefect. Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake. Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus. Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization). Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools. Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues. Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams. Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed. Bachelor’s in computer science or related field. Exposure to MLOps pipelines (MLflow, Kubeflow, Feature Stores) is a plus but not mandatory. Engineers with certifications will be preferred. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position: Are you a passionate backend engineer looking to make a significant impact? Join our cross-functional, distributed team responsible for building and maintaining the core backend functionalities that power our customers. You’ll be instrumental in developing scalable and robust solutions, directly impacting on the efficiency and reliability of our platform. This role offers a unique opportunity to work on cutting-edge technologies and contribute to a critical part of our business, all within a supportive and collaborative environment. Role: Junior .Net Engineer Location: Hyderabad Experience: 3 to 5 years Job Type: Full Time Employment What You'll Do: Implement feature/module as per design and requirements shared by Architect, Leads, BA/PM using coding best practices Develop, and maintain microservices using C# and .NET Core perform unit testing as per code coverage benchmark. Support testing & deployment activities Micro-Services - containerized micro-services (Docker/Kubernetes/Ansible etc.) Create and maintain RESTful APIs to facilitate communication between microservices and other components. Analyze and fix defects to develop high standard stable codes as per design specifications. Utilize version control systems (e.g., Git) to manage source code. Requirement Analysis: Understand and analyze functional/non-functional requirements and seek clarifications from Architect/Leads for better understanding of requirements. Participate in estimation activity for given requirements. Coding and Development: Writing clean and maintainable code using best practices of software development. Make use of different code analyzer tools. Follow TTD approach for any implementation. Perform coding and unit testing as per design. Problem Solving/ Defect Fixing: Investigate and debug any defect raised. Finding root causes, finding solutions, exploring alternate approaches and then fixing defects with appropriate solutions. Fix defects identified during functional/non-functional testing, during UAT within agreed timelines. Perform estimation for defect fixes for self and the team. Deployment Support: Provide prompt response during production support Expertise You'll Bring: Language – C# Visual Studio Professional Visual Studio Code .NET Core 3.1 onwards Entity Framework with code-first approach Dependency Injection Error Handling and Logging SDLC Object-Oriented Programming (OOP) Principles SOLID Principles Clean Coding Principles Design patterns API Rest API with token-based Authentication & Authorization Postman Swagger Database Relational Database: SQL Server/MySQL/ PostgreSQL Stored Procedures and Functions Relationships, Data Normalization & Denormalization, Indexes and Performance Optimization techniques Preferred Skills Development Exposure to Cloud: Azure/GCP/AWS Code Quality Tool – Sonar Exposure to CICD process and tools like Jenkins etc., Good understanding of docker and Kubernetes Exposure to Agile software development methodologies and ceremonies Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. 
is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a value-driven and people-centric work environment that enables our employees to: accelerate growth, both professionally and personally; impact the world in powerful, positive ways, using the latest technologies; enjoy collaborative innovation, with diversity and work-life wellbeing at the core; and unlock global opportunities to work and learn with the industry’s best. Let’s unleash your full potential at Persistent. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
Job title: Data Engineer Experience: 5–8 Years Location: Remote Shift: IST (Indian Standard Time) Contract Type: Short-Term Contract Job Overview: We are seeking an experienced Data Engineer with deep expertise in Microsoft Fabric to join our team on a short-term contract basis. You will play a pivotal role in designing and building scalable data solutions and enabling business insights in a modern cloud-first environment. The ideal candidate will have a passion for data architecture, strong hands-on technical skills, and the ability to translate business needs into robust technical solutions. Key Responsibilities: Design and implement end-to-end data pipelines using Microsoft Fabric components (Data Factory, Dataflows Gen2). Build and maintain data models, semantic layers, and data marts for reporting and analytics. Develop and optimize SQL-based ETL processes integrating structured and unstructured data sources. Collaborate with BI teams to create effective Power BI datasets, dashboards, and reports. Ensure robust data integration across various platforms (on-premises and cloud). Implement mechanisms for data quality, validation, and error handling. Translate business requirements into scalable and maintainable technical solutions. Optimize data pipelines for performance and cost-efficiency. Provide technical mentorship to junior data engineers as needed. Required Skills: Hands-on experience with Microsoft Fabric: Dataflows Gen2, Pipelines, OneLake. Strong proficiency in Power BI, including semantic modeling and dashboard/report creation. Deep understanding of data modeling techniques: star schema, snowflake schema, normalization, denormalization. Expertise in SQL, stored procedures, and query performance tuning. Experience integrating data from diverse sources: APIs, flat files, databases, and streaming. Knowledge of data governance, lineage, and data catalog tools within the Microsoft ecosystem. Strong problem-solving skills and ability to manage large-scale data workflows.
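Pipelines like the ones this posting describes usually load incrementally rather than re-ingesting full tables. A hedged, generic sketch of the high-watermark pattern follows, with sqlite3 standing in for both the source and the target and hypothetical table names; in Fabric itself the same idea is normally configured in a pipeline or Dataflow Gen2 rather than hand-written.

```python
import sqlite3

# Hypothetical source and target; sqlite3 stands in for both ends of the pipeline.
src = sqlite3.connect(":memory:")
src.executescript("""
CREATE TABLE orders (order_id INTEGER, modified_at TEXT, amount REAL);
INSERT INTO orders VALUES (1, '2024-04-01 10:00', 50.0), (2, '2024-04-02 09:30', 75.0);
""")

tgt = sqlite3.connect(":memory:")
tgt.executescript("""
CREATE TABLE stg_orders (order_id INTEGER, modified_at TEXT, amount REAL);
CREATE TABLE etl_watermark (table_name TEXT PRIMARY KEY, high_watermark TEXT);
INSERT INTO etl_watermark VALUES ('orders', '2024-04-01 12:00');
""")


def incremental_load() -> int:
    # Read the stored high watermark, pull only newer rows, then advance the watermark.
    wm = tgt.execute("SELECT high_watermark FROM etl_watermark "
                     "WHERE table_name = 'orders'").fetchone()[0]
    rows = src.execute("SELECT order_id, modified_at, amount FROM orders "
                       "WHERE modified_at > ?", (wm,)).fetchall()
    if rows:
        tgt.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", rows)
        new_wm = max(r[1] for r in rows)
        tgt.execute("UPDATE etl_watermark SET high_watermark = ? "
                    "WHERE table_name = 'orders'", (new_wm,))
        tgt.commit()
    return len(rows)


print("rows ingested:", incremental_load())   # only the order modified after the watermark
```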
Posted 1 week ago
0.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv. Responsibilities Requisition ID R-10363280 Date posted 06/17/2025 End Date 06/30/2025 City Pune State/Region Maharashtra Country India Additional Locations Noida, Uttar Pradesh Location Type Onsite Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Tech Lead, Data Architecture What Does a great Data Architecture do at Fiserv? We are seeking a seasoned Data Architect with extensive experience in data modeling and architecting data solutions, particularly with Snowflake. The ideal candidate will have 8-12 years of hands-on experience in designing, implementing, and optimizing data architectures to meet the evolving needs of our organization. As a Data Architect, you will play a pivotal role in ensuring the robustness, scalability, and efficiency of our data systems. What you will do: Data Architecture Design: Develop, optimize, and oversee conceptual and logical data systems, ensuring they meet both current and future business requirements. Data Modeling: Create and maintain data models using Snowflake, ensuring data integrity, performance, and security. Solution Architecture: Design and implement end-to-end data solutions, including data ingestion, transformation, storage, and access. Stakeholder Collaboration: Work closely with business stakeholders, data scientists, and engineers to understand data requirements and translate them into technical specifications. Performance Optimization: Monitor and improve data system performance, addressing any issues related to scalability, efficiency, and data quality. Governance and Compliance: Ensure data architectures comply with data governance policies, standards, and industry regulations. Technology Evaluation: Stay current with emerging data technologies and assess their potential impact and value to the organization. Mentorship and Leadership: Provide technical guidance and mentorship to junior data architects and engineers, fostering a culture of continuous learning and improvement. What you will need to have: 8-12 Years of Experience in data architecture and data modeling in Snowflake. Proficiency in Snowflake data warehousing platform. Strong understanding of data modeling concepts, including normalization, denormalization, star schema, and snowflake schema. Experience with ETL/ELT processes and tools. Familiarity with data governance and data security best practices. Knowledge of SQL and performance tuning for large-scale data systems. Experience with cloud platforms (AWS, Azure, or GCP) and related data services. Excellent problem-solving and analytical skills. 
Strong communication and interpersonal skills, with the ability to translate technical concepts for non-technical stakeholders. Demonstrated ability to lead and mentor technical teams. What would be nice to have: Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Certifications: Snowflake certifications or other relevant industry certifications. Industry Experience: Experience in Finance/Cards/Payments industry Thank you for considering employment with Fiserv. Please: Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Spendflo is a fast-growing Series A startup helping companies streamline how they procure, manage, and optimize their software and services. Backed by top-tier investors, we're building the most intelligent, automated platform for procurement operations. We are now looking for a Senior Data Engineer to design, build, and scale our data infrastructure. You'll be the backbone of all data movement at Spendflo, from ingestion to transformation. What You'll Do: Design, implement, and own the end-to-end data architecture at Spendflo. Build and maintain robust, scalable ETL/ELT pipelines across multiple sources and systems. Develop and optimize data models for analytics, reporting, and product needs. Own the reporting layer and work with PMs, analysts, and leadership to deliver actionable data. Ensure data quality, consistency, and lineage through validation and monitoring. Collaborate with engineering, product, and data science teams to build seamless data flows. Optimize data storage and query performance for scale and speed. Own documentation for pipelines, models, and data flows. Stay current with the latest data tools and bring in the right technologies. Mentor junior data engineers and help establish data best practices. Qualifications: 5+ years of experience as a data engineer, preferably in a product/startup environment. Strong expertise in building ETL/ELT pipelines using modern frameworks (e.g., Dagster, dbt, Airflow). Deep knowledge of data modeling (star/snowflake schemas, denormalization, dimensional modeling). Hands-on with SQL (advanced queries, performance tuning, window functions, etc.). Experience with cloud data warehouses like Redshift, BigQuery, Snowflake, or similar. Comfortable working with cloud platforms (AWS/GCP/Azure) and tools like S3, Lambda, etc. Exposure to BI tools like Looker, Power BI, Tableau, or equivalent. Strong debugging and performance tuning skills. Excellent communication and documentation skills. Preferred Qualifications: Built or managed large-scale, cloud-native data pipelines. Experience with real-time or stream processing (Kafka, Kinesis, etc.). Understanding of data governance, privacy, and security best practices. Exposure to machine learning pipelines or collaboration with data science teams. Startup experience: able to handle ambiguity, fast pace, and end-to-end ownership. (ref:hirist.tech)
Posted 1 week ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
JD For Data Modeler. Key Requirements: Total 8+ years of experience, with 8 years of hands-on experience in data modelling. Expertise in conceptual, logical, and physical data modeling. Proficient in tools such as Erwin, SQL DBM, or similar. Strong understanding of data governance and database design best practices. Excellent communication and collaboration skills. Having 8+ years of working experience in Data Engineering and Data Analytics projects, implementing Data Warehouse, Data Lake and Lakehouse solutions and the associated ETL/ELT patterns. Worked as a Data Modeller in one or two implementations, creating and implementing data models and database designs using Dimensional and ER models. Good knowledge and experience in modelling complex scenarios like many-to-many relationships, SCD types, late-arriving facts and dimensions, etc. Hands-on experience in at least one data modelling tool such as Erwin, ER/Studio, Enterprise Architect or SQLDBM. Experience working closely with business stakeholders/business analysts to understand functional requirements and translate them into data models and database designs. Experience creating conceptual and logical models and translating them into physical models that address both functional and non-functional requirements. Strong knowledge of SQL, able to write complex queries and profile data to understand relationships and DQ issues. Very strong understanding of database modelling and design principles like normalization, denormalization, and isolation levels. Experience in performance optimization through database design (physical modelling). (ref:hirist.tech)
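Dimensional modelling, which both of the postings above lean on, typically means organising a warehouse around fact and dimension tables. Below is a minimal star-schema sketch using an in-memory SQLite database; the sales tables, keys, and rows are hypothetical and only illustrate the shape of the design.

```python
# Minimal star-schema sketch: one fact table keyed to two dimension tables.
# Table names, columns, and rows are hypothetical illustration data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (
        date_key   INTEGER PRIMARY KEY,
        full_date  TEXT,
        month      INTEGER,
        year       INTEGER
    );
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT
    );
    CREATE TABLE fact_sales (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        amount      REAL
    );
    INSERT INTO dim_date VALUES (20240601, '2024-06-01', 6, 2024);
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware');
    INSERT INTO fact_sales VALUES (20240601, 1, 3, 450.0);
""")

# A typical star-schema query joins the fact to its dimensions and aggregates.
for row in conn.execute("""
    SELECT d.year, d.month, p.category, SUM(f.amount) AS total_amount
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, d.month, p.category
"""):
    print(row)
```

A snowflake schema is the same idea with the dimensions further normalized, for example splitting category out of dim_product into its own table.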
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Modeler JD: Proven experience as a Data Modeler or in a similar role (8 years depending on seniority level). Proficiency in data modeling tools (e.g., ER/Studio, Erwin, SAP PowerDesigner, or similar). Strong understanding of database technologies (e.g., SQL Server, Oracle, PostgreSQL, Snowflake). Experience with cloud data platforms (e.g., AWS, Azure, GCP). Familiarity with ETL processes and tools. Excellent knowledge of normalization and denormalization techniques. Strong analytical and problem-solving skills.
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Proficiency in data modeling tools such as ER/Studio, ERwin or similar. Deep understanding of relational database design, normalization/denormalization, and data warehousing principles. Experience with SQL and working knowledge of database platforms like Oracle, SQL Server, PostgreSQL, or Snowflake. Strong knowledge of metadata management, data lineage, and data governance practices. Understanding of data integration, ETL processes, and data quality frameworks. Ability to interpret and translate complex business requirements into scalable data models. Excellent communication and documentation skills to collaborate with cross-functional teams.
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities: Ab Initio Development & Optimization: Design, develop, test, and deploy high-performance, scalable ETL/ELT solutions using Ab Initio components (GDE, Co>Operating System, EME, Control Center). Translate complex business requirements and data transformation rules into efficient and maintainable Ab Initio graphs and plans. Optimize existing Ab Initio applications for improved performance, resource utilization, and reliability. Troubleshoot, debug, and resolve complex data quality and processing issues within Ab Initio graphs and systems.

Data Modeling & Advanced SQL: Apply expertise in advanced SQL to write complex queries for data extraction, transformation, validation, and analysis across various relational databases (e.g., DB2, Oracle, SQL Server). Design and implement efficient relational data models (e.g., Star Schema, Snowflake Schema, 3NF) for data warehousing and analytics. Understand and apply big data modeling concepts (e.g., denormalization for performance, schema-on-read, partitioning strategies for distributed systems).

Spark & Big Data Integration: Collaborate with data architects on data integration strategies in a hybrid environment, understanding how Ab Initio processes interact with or feed into big data platforms. Analyze and debug data flow issues that may span traditional ETL and big data platforms (e.g., HDFS, Hive, Spark). Demonstrate strong foundational knowledge in Apache Spark, including understanding Spark SQL and DataFrame operations, to comprehend and potentially assist in debugging Spark-based pipelines.

Collaboration & Documentation: Work effectively with business analysts, data architects, QA teams, and other developers to deliver high-quality data solutions. Create and maintain comprehensive technical documentation for Ab Initio graphs, data lineage, data models, and ETL processes. Participate in code reviews, design discussions, and contribute to best practices within the team.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. 5+ years of hands-on, in-depth development experience with Ab Initio GDE, Co>Operating System, and EME. Expert-level proficiency in SQL for complex data manipulation, analysis, and optimization across various relational databases. Solid understanding of relational data modeling concepts and experience designing logical and physical data models. Demonstrated proficiency or strong foundational knowledge in Apache Spark (Spark SQL, DataFrames) and familiarity with the broader Hadoop ecosystem (HDFS, Hive). Experience with Unix/Linux shell scripting. Strong understanding of ETL processes, data warehousing concepts, and data integration patterns. Excellent problem-solving, analytical, and troubleshooting skills. Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
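The Spark requirement above centres on Spark SQL and DataFrame operations rather than Ab Initio itself, which is proprietary and not sketched here. A minimal PySpark example, assuming pyspark is installed and using hypothetical sample rows, shows the same aggregation expressed through the DataFrame API and through Spark SQL:

```python
# Minimal Spark SQL / DataFrame sketch. Requires pyspark; the sample rows
# and column names are hypothetical illustration data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

df = spark.createDataFrame(
    [("acme", 120.0), ("acme", 340.0), ("globex", 80.0)],
    ["customer", "amount"],
)

# DataFrame API: total amount per customer.
df.groupBy("customer").agg(F.sum("amount").alias("total_amount")).show()

# The equivalent expressed as Spark SQL against a temporary view.
df.createOrReplaceTempView("orders")
spark.sql("""
    SELECT customer, SUM(amount) AS total_amount
    FROM orders
    GROUP BY customer
""").show()

spark.stop()
```

Being able to read both forms is usually enough to debug a Spark-based pipeline that an Ab Initio job feeds into.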
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Overview: A Data Modeller is responsible for designing, implementing, and managing data models that support the strategic and operational needs of an organization. This role involves translating business requirements into data structures, ensuring consistency, accuracy, and efficiency in data storage and retrieval processes.

Responsibilities: Develop and maintain conceptual, logical, and physical data models. Collaborate with business analysts, data architects, and stakeholders to gather data requirements. Translate business needs into efficient database designs. Optimize and refine existing data models to support analytics and reporting. Ensure data models support data governance, quality, and security standards. Work closely with database developers and administrators on implementation. Document data models, metadata, and data flows.

Requirements: Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field. Data Modeling Tools: ER/Studio, ERwin, SQL Developer Data Modeler, or similar. Database Technologies: Proficiency in SQL and familiarity with databases like Oracle, SQL Server, MySQL, PostgreSQL. Data Warehousing: Experience with dimensional modeling, star and snowflake schemas. ETL Processes: Knowledge of Extract, Transform, Load processes and tools. Cloud Platforms: Familiarity with cloud data services (e.g., AWS Redshift, Azure Synapse, Google BigQuery). Metadata Management & Data Governance: Understanding of data cataloging and governance principles. Strong analytical and problem-solving skills. Excellent communication skills to work with business stakeholders and technical teams. Ability to document models clearly and explain complex data relationships. 5+ years in data modeling, data architecture, or related roles. Experience working in Agile or DevOps environments is often preferred. Understanding of normalization/denormalization. Experience with business intelligence and reporting tools. Familiarity with master data management (MDM) principles.
Posted 2 weeks ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Data Modeler. Primary Skills: Data Modelling, Database Design, Erwin, Dimensional Modelling. Secondary Skills: Data Lake and Lakehouse design, Data Warehouse design. Location: Hyderabad. Industry: Insurance. Employment Type: Permanent. Functional Area: Solutions & Delivery. Experience: 6-8 Years.

Are you a seasoned Data Modeller with expertise in Data & Analytics projects? Do you stay ahead of the curve with the latest technologies, and are you eager to expand your knowledge? Do you thrive in dynamic, fast-paced environments and have a passion for delivering high-quality solutions? If so, we have an exciting opportunity for you! As a Data Modeler at ValueMomentum, you will be responsible for designing and implementing scalable, high-performance Modern Data & Analytics solutions in an agile environment. You will work closely with cross-functional teams to create reusable, testable, and sustainable data architectures that align with business needs. This role will directly impact the quality of data systems and analytics solutions, helping organizations unlock the full potential of their data.

Why ValueMomentum? Headquartered in New Jersey, US, ValueMomentum is one of the fastest-growing software & solutions firms focused on the Healthcare, Insurance, and Financial Services domains. Our industry focus, expertise in technology backed by R&D, and our customer-first approach uniquely position us to deliver value and drive momentum for our customers’ initiatives. At ValueMomentum, we value continuous learning, innovation, and collaboration. As a Data Modeler, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our data-driven solutions. You will collaborate with a talented team of professionals, shape the future of data architecture, and contribute to the success of our clients in industries that are transforming rapidly. If you're ready to take your career to the next level, apply today to join our dynamic team and help us drive innovation in the world of Modern Data & Analytics!

Key Responsibilities: Design and develop conceptual, logical, and physical data models meeting business requirements and strategic initiatives. Design, develop, and maintain insurance data models. Collaborate with business stakeholders and data engineers to understand data needs and translate them into effective data models. Analyze and evaluate existing data models and databases to identify opportunities for optimization, standardization, and improvement. Define data standards, naming conventions, and data governance policies to ensure consistency, integrity, and quality of data models. Develop and maintain documentation of data models, data dictionaries, STTM, and metadata to facilitate understanding and usage of data assets. Implement best practices for data modelling, including normalization, denormalization, indexing, partitioning, and optimization techniques. Work closely with database administrators to ensure proper implementation and maintenance of data models in database management systems. Stay abreast of industry trends, emerging technologies, and best practices in data modelling, database design, and data management.

Must-have Skills: 6–8 years of hands-on experience in data modelling, including database design and dimensional modelling (star/snowflake schemas). Hands-on experience in implementing data models for Policy, Claims, and Finance subject areas within the Property & Casualty (P&C) Insurance domain. Proficiency in data modelling tools such as erwin, ER/Studio, or PowerDesigner. Strong SQL skills and experience working with relational databases, primarily SQL Server. Exposure to design principles and best practices for Data Lake and Lakehouse architectures. Experience with big data platforms (e.g., Spark). Strong documentation skills, including data dictionaries, STTM, ER models, etc. Familiarity with data warehouse design principles, ETL processes, and data integration techniques. Knowledge of cloud-based data platforms and infrastructure.

Nice-to-have Skills: Expertise in advanced data modelling techniques for real-time/near real-time data solutions. Experience with NoSQL data modelling. Hands-on experience with any BI tool.

Your Key Accountabilities: Collaborate with cross-functional teams to align data models with business and technical requirements. Define and enforce best practices for data modelling and database design. Provide technical guidance on database optimization and performance tuning. Draft technical guidelines, documentation, and data dictionaries to standardize data modelling practices.

What We Offer: Career Advancement: Individual career development, coaching, and mentoring programs for professional and leadership skill development. Comprehensive training and certification programs. Performance Management: Goal setting, continuous feedback, and year-end appraisal. Reward and recognition for extraordinary performers. Benefits: Comprehensive health benefits, wellness, and fitness programs. Paid time off and holidays. Culture: A highly transparent organization with an open-door policy and a vibrant culture.

If you’re enthusiastic about Data & Analytics and eager to make an impact through your expertise, we invite you to join us. Apply now and become part of a team that's driving the future of data-driven decision-making!
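Maintaining data dictionaries and model documentation, listed in the responsibilities above, is often partly automated by introspecting the database catalog. The sketch below assumes an in-memory SQLite database with hypothetical policy and claim tables and emits a simple column-level data dictionary; a real implementation would read the catalog views of SQL Server or whichever platform hosts the models.

```python
# Minimal sketch: build a column-level data dictionary by introspecting
# the database catalog. Tables and columns are hypothetical illustration data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policy (policy_id INTEGER PRIMARY KEY, holder_name TEXT, premium REAL);
    CREATE TABLE claim (claim_id INTEGER PRIMARY KEY, policy_id INTEGER, claim_amount REAL);
""")

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

data_dictionary = []
for table in tables:
    # PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk).
    for cid, name, col_type, notnull, default, pk in conn.execute(
            f"PRAGMA table_info({table})"):
        data_dictionary.append({
            "table": table,
            "column": name,
            "type": col_type,
            "nullable": not notnull,
            "primary_key": bool(pk),
        })

for entry in data_dictionary:
    print(entry)
```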
Posted 3 weeks ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description: We are looking for a seasoned Data Engineer with 5–8 years of experience, specializing in Microsoft Fabric, for our UK-based client. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.

Key Responsibilities: Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2). Create and maintain data models, semantic models, and data marts for analytical and reporting purposes. Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources. Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports. Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud). Ensure data integrity, quality, and governance through automated validation and error handling mechanisms. Work with business stakeholders to understand data requirements and translate them into technical specifications. Optimize data workflows for performance and cost-efficiency in a cloud-first architecture. Provide mentorship and technical guidance to junior data engineers.

Required Skills: Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake. Proficiency in Power BI, including building reports, dashboards, and working with semantic models. Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization. Deep experience with SQL, stored procedures, and query optimization. Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data. Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem. Strong problem-solving skills and experience in performance tuning of large datasets.
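Automated validation and error handling, called out above as the way to protect data integrity before loads, can start as simple rule-based checks. A minimal, tool-agnostic sketch in plain Python follows (Microsoft Fabric's own Dataflows are not modelled here; the rules and sample records are hypothetical):

```python
# Minimal sketch of rule-based data-quality validation ahead of a load step.
# The rules and sample records are hypothetical illustration data.
def validate(record):
    """Return a list of rule violations for one record (empty list = clean)."""
    errors = []
    if not record.get("customer_id"):
        errors.append("customer_id is missing")
    if record.get("amount") is None or record["amount"] < 0:
        errors.append("amount must be a non-negative number")
    if record.get("currency") not in {"INR", "USD", "EUR"}:
        errors.append(f"unexpected currency: {record.get('currency')}")
    return errors

records = [
    {"customer_id": "C001", "amount": 120.0, "currency": "INR"},
    {"customer_id": "", "amount": -5.0, "currency": "XYZ"},
]

clean, rejected = [], []
for rec in records:
    problems = validate(rec)
    (rejected if problems else clean).append((rec, problems))

print("clean rows:", len(clean))
for rec, problems in rejected:
    print("rejected:", rec, "->", problems)
```

In a pipeline, rejected rows would typically be routed to an error table along with the violation details so they can be monitored and reprocessed.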
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description. Key Responsibilities: Develop, optimize, and maintain complex SQL queries, stored procedures, functions, and views. Analyze slow-performing queries and optimize execution plans to improve database performance. Design and implement indexing strategies to enhance query efficiency. Work with developers to optimize database interactions in applications. Develop and implement Teradata best practices for large-scale data processing and ETL workflows. Monitor and troubleshoot Teradata performance issues using tools like DBQL (Database Query Log), Viewpoint, and Explain Plan Analysis. Perform data modeling, normalization, and schema design improvements. Collaborate with teams to implement best practices for database tuning and performance enhancement. Automate repetitive database tasks using scripts and scheduled jobs. Document database architecture, queries, and optimization techniques.

Required Skills & Qualifications: Strong proficiency in Teradata SQL, including query optimization techniques. Strong proficiency in SQL (T-SQL, PL/SQL, or equivalent). Experience with indexing strategies, partitioning, and caching techniques. Knowledge of database normalization, denormalization, and best practices. Familiarity with ETL processes, data warehousing, and large datasets. Experience in writing and optimizing stored procedures, triggers, and functions. Hands-on experience in Teradata performance tuning, indexing, partitioning, and statistics collection. Experience with EXPLAIN plans, DBQL analysis, and Teradata Viewpoint monitoring. Power BI / Tableau integration experience is good to have.

About Us: Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer: Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities: Understand and adhere to information security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems. Take part in information security training and act accordingly while handling information. Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO). Understand and adhere to the additional information security responsibilities that are part of the assigned job role.
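Execution-plan analysis and indexing, central to the tuning work above, can be illustrated in a portable way; Teradata's own Explain and DBQL output looks different, so the sketch below uses SQLite's EXPLAIN QUERY PLAN purely as an analogy, with a hypothetical sales table.

```python
# Minimal sketch: compare the query plan before and after adding an index.
# SQLite's EXPLAIN QUERY PLAN stands in for Teradata's Explain here; the
# table and data are hypothetical illustration data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north" if i % 10 == 0 else "south", float(i)) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = 'north'"

print("-- without index --")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # expect a full table scan of sales

conn.execute("CREATE INDEX idx_sales_region ON sales(region)")
conn.execute("ANALYZE")

print("-- with index --")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # expect the plan to search sales using idx_sales_region
```

The workflow is the same in Teradata terms: read the plan, identify scans on large tables, and adjust indexes, partitioning, or statistics until the plan improves.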
Posted 3 weeks ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: SAP Analytics Cloud Specialist. Career Level: D2.

Introduction to role: Are you ready to make a significant impact in the world of analytics? Join AstraZeneca's Process Insights team within Global Business Services (GBS) as an SAP Analytics Cloud Specialist. We are on a mission to transform business processes through automation, analytics, and AI capabilities. As we scale our capabilities, you'll play a pivotal role in delivering SAP analytics solutions that drive progress across AstraZeneca.

Accountabilities: Collaborate with stakeholders to understand their business process requirements and objectives, translating them into SAP Analytics solutions (SAC & Datasphere). Create Extract, Transform, and Load (ETL) data pipelines, data warehousing, and testing. Validate and assure data quality and accuracy, including data cleansing, enrichment, and building data models. Develop comprehensive analytics and dashboards for business collaborators for reporting, business planning, and critical metric tracking purposes. Enhance solution experiences and visualizations using low/no-code development.

Essential Skills/Experience: Degree in Computer Science, Business Informatics or a comparable degree. Overall 8-10 years of experience and at least 2 years’ experience working on SAP SAC / Datasphere solutions as a Data Analyst and/or Data Engineer. Experience in SAP Datasphere, ETL, building data pipelines, preparing and integrating data, data modelling, and understanding of relational data modelling and denormalization techniques. Experience in SAP Analytics Cloud in creating advanced analytics/dashboards, i.e. stories, boardrooms, planning. Knowledge of analytics standard processes. Understanding of SAP-related Finance and/or Operations processes will be valued. Certification in one or more of the following will be appreciated: SAC Data Analyst, Data Engineer, Low-Code/No-Code Developer. Good communication skills and ability to work in an Agile environment. Energetic, organized and self-motivated. Fluent in business English.

Desirable Skills/Experience: NA.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. AstraZeneca is a dynamic company where innovation is at the forefront of everything we do. Here, you can apply your skills to genuinely impact patients' lives while being part of a global team that drives excellence and breakthroughs. With a focus on digital transformation and leveraging radical technologies, we offer an environment where you can challenge norms, take ownership, and make quick decisions. Our commitment to sustainability and empowering our teams ensures that every action contributes to a greater purpose. Ready to take the next step in your career? Apply now and be part of our journey towards transforming healthcare through analytics!
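Data cleansing and enrichment, listed among the accountabilities above, usually amounts to a small transform step between extract and load. A minimal, tool-agnostic sketch in plain Python follows (SAP Datasphere itself is not modelled here; the raw records and lookup table are hypothetical):

```python
# Minimal extract-transform-load sketch: cleanse raw records and enrich them
# from a lookup table before loading. All data here is hypothetical.
raw_records = [
    {"vendor": "  Acme Corp ", "spend": "1200.50", "country": "in"},
    {"vendor": "Globex", "spend": "n/a", "country": "US"},
]

country_names = {"IN": "India", "US": "United States"}  # enrichment lookup

def transform(record):
    """Cleanse one record; return None if it cannot be repaired."""
    vendor = record["vendor"].strip()
    try:
        spend = float(record["spend"])
    except ValueError:
        return None  # in a real pipeline, route this to a rejects table
    country_code = record["country"].upper()
    return {
        "vendor": vendor,
        "spend": spend,
        "country_code": country_code,
        "country_name": country_names.get(country_code, "Unknown"),
    }

loaded = [t for t in (transform(r) for r in raw_records) if t is not None]
print(loaded)
```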
Posted 3 weeks ago
8.0 - 10.0 years
5 - 8 Lacs
Chennai
On-site
Job ID R-226449. Date posted 05/23/2025. Job Title: SAP Analytics Cloud Specialist. Career Level: D2.

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 4 weeks ago