
115 OLTP Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7 - 11 years

15 - 19 Lacs

Hyderabad

Work from Office

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

What you will do
Role Description:
We are seeking a Data Solutions Architect to design, implement, and optimize scalable, high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. The role focuses on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.

Roles & Responsibilities:
- Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics.
- Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across business domains.
- Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms.
- Lead the development of high-performance data pipelines for batch and real-time processing, integrating APIs, streaming sources, transactional systems, and external data platforms.
- Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities.
- Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms.
- Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded in enterprise data solutions.
- Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency (see the sketch below this listing).
- Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies to ensure iterative delivery of enterprise data capabilities.
- Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals.
- Act as a trusted advisor on emerging data technologies and trends, ensuring the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability.

What we expect of you
Must-Have Skills:
- Experience in data architecture, enterprise data management, and cloud-based analytics solutions.
- Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks.
- Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization.
- Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions.
- Deep understanding of data governance, security, metadata management, and access control frameworks.
- Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC).
- Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives.
- Strong problem-solving, strategic thinking, and technical leadership skills.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with Apache Spark.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience with Data Mesh architectures and federated data governance models.
- Certification in cloud data platforms or enterprise architecture frameworks.
- Knowledge of AI/ML pipeline integration within enterprise data architectures.
- Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting.

Education and Professional Certifications
- Doctorate degree with 6-8+ years of experience in Computer Science, IT or a related field OR
- Master's degree with 8-10+ years of experience in Computer Science, IT or a related field OR
- Bachelor's degree with 10-12+ years of experience in Computer Science, IT or a related field
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
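
The DataOps bullet above is easiest to see in miniature. Below is a minimal, hypothetical sketch (invented schema and function names, not Amgen's actual pipeline) of a pure transformation plus the pytest-style unit test a CI job such as Jenkins could run before a pipeline is deployed:

```python
# Minimal sketch of a CI-testable pipeline transformation (hypothetical schema).
# A CI job would run `pytest` on this module as a deployment gate.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    country: str
    enrolled: bool

def active_patients_by_country(records: list[PatientRecord]) -> dict[str, int]:
    """Aggregate enrolled patients per country; pure function, so trivially testable."""
    counts: dict[str, int] = {}
    for r in records:
        if r.enrolled:
            counts[r.country] = counts.get(r.country, 0) + 1
    return counts

def test_active_patients_by_country():
    rows = [
        PatientRecord("p1", "US", True),
        PatientRecord("p2", "US", False),
        PatientRecord("p3", "IN", True),
    ]
    assert active_patients_by_country(rows) == {"US": 1, "IN": 1}
```

Keeping transformations as pure functions is what makes this gate cheap: the test needs no cluster, only sample rows.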

Posted 2 months ago

Apply

7 - 11 years

10 - 14 Lacs

Hyderabad

Work from Office

What you will do
Let's do this. Let's change the world. In this vital role you will drive the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Senior Data Engineer, you will play a crucial role in designing, building, and optimizing our data pipelines and platforms while mentoring junior engineers.

Roles & Responsibilities:
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks.
- Ensure data quality and integrity through rigorous testing and monitoring.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Work closely with data analysts, data scientists, and business collaborators to understand data requirements.
- Identify and resolve complex data-related challenges.
- Adhere to data engineering best practices and standards.
- Develop in an Agile environment; be comfortable with Agile terminology and ceremonies.
- Use code versioning with Git, Jenkins, and code migration tools; exposure to Jira or Rally.
- Identify and implement opportunities for automation and CI/CD.
- Stay up to date with the latest data technologies and trends.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Doctorate degree and 2 years of Computer Science, IT or related field experience OR
- Master's degree and 8 to 10 years of Computer Science, IT or related field experience OR
- Bachelor's degree and 10 to 14 years of Computer Science, IT or related field experience OR
- Diploma and 14 to 18 years of Computer Science, IT or related field experience

Preferred Qualifications:
Functional Skills:
Must-Have Skills (not more than 3 to 4):
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP) and the ability to architect cost-effective and scalable data solutions.
- Proficiency in Python, PySpark, and SQL.
- Hands-on experience with big data ETL performance tuning (see the sketch below this listing).
- Strong development knowledge in Databricks.
- Strong analytical and problem-solving skills to address complex data challenges.

Good-to-Have Skills:
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience working with Apache Spark and Apache Airflow.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with prompt engineering and model fine-tuning.
- Experience with DevOps/MLOps CI/CD build and deployment pipelines.

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certification (preferred)
- Any SAFe Agile certification (preferred)

Soft Skills:
- Initiative to explore alternate technologies and approaches to solving problems.
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
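
For the ETL performance-tuning skill named above, here is a minimal PySpark sketch of two common moves: broadcasting a small dimension to avoid a shuffle join, and partitioning output by a query-friendly column. Paths and column names are hypothetical, not from this posting.

```python
# Hypothetical ETL tuning sketch: broadcast join + partitioned write.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-tuning-sketch").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # large fact
sites = spark.read.parquet("s3://example-bucket/raw/sites/")    # small dimension

# Broadcasting the small side avoids shuffling the large fact table.
enriched = (orders
            .join(F.broadcast(sites), on="site_id", how="left")
            .withColumn("order_date", F.to_date("order_ts")))

# Partitioning by date keeps downstream date-filtered scans cheap.
(enriched.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-bucket/curated/orders_enriched/"))
```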

Posted 2 months ago

Apply

- 2 years

3 - 5 Lacs

Hyderabad

Work from Office

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role requires a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Own development of complex ETL/ELT data pipelines to process large-scale datasets.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Explore and implement new tools and technologies to enhance the ETL platform and pipeline performance.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Understand the biotech/pharma domain and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

What we expect of you
Must-Have Skills:
- Experience in data engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies.
- Strong understanding of data processing and transformation with big data frameworks: Databricks, Apache Spark, Delta Lake, and distributed computing concepts (see the sketch below this listing).
- Strong, demonstrable understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Exposure to APIs and full-stack development.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Bachelor's degree and 2 to 5+ years of Computer Science, IT or related field experience OR
- Master's degree and 1 to 4+ years of Computer Science, IT or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail oriented.
- Strong presentation and public speaking skills.

What you can expect of us
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
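
The Databricks/Delta Lake skill above centers on patterns like the one sketched here: an idempotent upsert (MERGE) into a curated Delta table. Table paths, keys, and columns are hypothetical; on Databricks the Delta libraries are preconfigured.

```python
# Hypothetical Delta Lake upsert: a common ETL/ELT building block.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

updates = spark.read.parquet("s3://example-bucket/staging/patients_delta/")
target = DeltaTable.forPath(spark, "s3://example-bucket/curated/patients/")

(target.alias("t")
       .merge(updates.alias("s"), "t.patient_id = s.patient_id")
       .whenMatchedUpdateAll()      # refresh rows that already exist
       .whenNotMatchedInsertAll()   # add rows seen for the first time
       .execute())
```

Because MERGE is keyed, rerunning the same staging batch leaves the table unchanged, which is what makes the load safe to retry.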

Posted 2 months ago

Apply

2 - 5 years

3 - 5 Lacs

Hyderabad

Work from Office

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role requires a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Own development of complex ETL/ELT data pipelines to process large-scale datasets.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Explore and implement new tools and technologies to enhance the ETL platform and pipeline performance.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Understand the biotech/pharma domain and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Experience in data engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies.
- Strong understanding of data processing and transformation with big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts).
- Strong, demonstrable understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Exposure to APIs and full-stack development.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Any degree and 2-5 years of experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 2 months ago

Apply

6 - 10 years

12 - 22 Lacs

Coimbatore

Work from Office

Looking for a Database Developer.

Posted 2 months ago

Apply

10 - 15 years

30 - 35 Lacs

Noida

Remote

SR. DATA MODELER - FULL-TIME ROLE - REMOTE OR ONSITE

Job Summary:
We are seeking an experienced Data Modeler to support the Enterprise Data Platform (EDP) initiative, focusing on building and optimizing curated data assets on Google BigQuery. This role requires expertise in data modeling, strong knowledge of retail data, and the ability to collaborate with data engineers, business analysts, and architects to create scalable, high-performing data structures.

Required Qualifications:
- 5+ years of experience in data modeling and architecture on cloud data platforms (BigQuery preferred).
- Expertise in dimensional modeling (Kimball), data vault, and normalization/denormalization techniques.
- Strong SQL skills, with hands-on experience in BigQuery performance tuning (partitioning, clustering, query optimization).
- Understanding of retail data models (e.g., sales, inventory, pricing, supply chain, customer analytics).
- Experience working with data engineering teams to implement models in ETL/ELT pipelines.
- Familiarity with data governance, metadata management, and data cataloging.
- Excellent communication skills and the ability to translate business needs into structured data models.

Key Responsibilities:
1. Data Modeling & Curated Layer Design
- Design logical, conceptual, and physical data models for the EDP's curated layer in BigQuery.
- Develop fact and dimension tables, ensuring adherence to dimensional modeling best practices (Kimball methodology).
- Optimize data models for performance, scalability, and query efficiency in a cloud-native environment.
- Work closely with data engineers to translate models into efficient BigQuery implementations (partitioning, clustering, materialized views; see the sketch below this listing).
2. Data Standardization & Governance
- Define and maintain data definitions, relationships, and business rules for curated assets.
- Ensure data integrity, consistency, and governance across datasets.
- Work with Data Governance teams to align models with enterprise data standards and metadata management policies.
3. Collaboration with Business & Technical Teams
- Engage with business analysts and product teams to understand data needs, ensuring models align with business requirements.
- Partner with data engineers and architects to implement best practices for data ingestion and transformation.
- Support BI and analytics teams by ensuring curated models are optimized for downstream consumption (e.g., Looker, Tableau, Power BI, AI/ML models, APIs).

If you are interested in the opportunity, please share the following details along with your most recent resume to geeta.negi@compunnel.com:
- Total experience
- Relevant experience
- Current CTC
- Expected CTC
- Notice period (last working day if you are serving notice)
- Current location
- Skill 1 - rating out of 5 (mention the skill)
- Skill 2 - rating out of 5 (mention the skill)
- Skill 3 - rating out of 5 (mention the skill)
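
The BigQuery tuning levers this JD names (partitioning and clustering) look like the following minimal sketch. The project, dataset, and schema are hypothetical; the DDL pattern itself is standard BigQuery SQL, issued here through the google-cloud-bigquery client.

```python
# Hypothetical curated fact table: partitioned by date, clustered by store.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

ddl = """
CREATE TABLE IF NOT EXISTS `example-project.curated.fct_sales` (
  sale_id    STRING,
  store_id   STRING,
  sale_date  DATE,
  quantity   INT64,
  net_amount NUMERIC
)
PARTITION BY sale_date   -- prunes scans for date-bounded queries
CLUSTER BY store_id      -- co-locates rows commonly filtered together
"""
client.query(ddl).result()
```

Partitioning bounds how much data a date-filtered query scans (and bills); clustering then sorts within each partition so filters on store_id read fewer blocks.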

Posted 2 months ago

Apply

5 - 10 years

7 - 12 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: Snowflake Schema
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education.

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your day will involve creating innovative solutions to address business needs and ensuring applications are tailored to specific requirements.

Roles & Responsibilities:
- Implement the Snowflake cloud data warehouse and related cloud architecture, migrating from various sources to Snowflake.
- Work with Snowflake capabilities such as Snowpipe, stages, SnowSQL, streams, and tasks.
- Implement advanced Snowflake concepts such as resource monitors, RBAC controls, virtual warehouse sizing, and zero-copy cloning (see the sketch below this listing).
- Apply in-depth knowledge and experience in data migration from RDBMS to the Snowflake cloud data warehouse.
- Deploy Snowflake features such as data sharing, events, and lakehouse patterns.
- Implement incremental extraction loads, both batched and streaming.
- Must have: Snowflake certification.

Professional & Technical Skills:
- Must-have: proficiency in Snowflake Data Warehouse.
- Good-to-have: experience with Snowflake Schema.
- Strong understanding of data warehousing concepts.
- Experience in ETL processes and data modeling.
- Knowledge of SQL and database management.
- Ability to troubleshoot and debug applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
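
Two of the Snowflake features named above, zero-copy cloning and stream/task-based incremental loads, fit in a short sketch. Connection parameters and object names are hypothetical; the SQL statements are standard Snowflake DDL issued through the snowflake-connector-python driver.

```python
# Hypothetical Snowflake sketch: zero-copy clone + stream/task incremental load.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...",
    warehouse="DEV_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: metadata-only, so it is instant and initially adds no storage.
cur.execute("CREATE OR REPLACE TABLE SALES_DEV CLONE SALES")

# Stream + task: capture row changes on SALES and fold them into history on a schedule.
cur.execute("CREATE OR REPLACE STREAM SALES_STREAM ON TABLE SALES")
cur.execute("""
    CREATE OR REPLACE TASK LOAD_SALES_DELTA
      WAREHOUSE = DEV_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO SALES_HISTORY
      SELECT * FROM SALES_STREAM
""")
cur.execute("ALTER TASK LOAD_SALES_DELTA RESUME")
```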

Posted 2 months ago

Apply

2 - 6 years

18 - 25 Lacs

Pune

Work from Office

Senior Associate, Full Stack Engineer

At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world's financial system, we touch nearly 20% of the world's investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what #LifeAtBNY is all about.

We're seeking a future team member for the role of Senior Associate, Full Stack Engineer to join our Compliance Engineering team. This role is in Pune, MH (hybrid).

In this role, you'll make an impact in the following ways:
- Overall 2-6 years of experience with ETL, databases, data warehouses, etc.
- In-depth technical knowledge as a Pentaho ETL developer; comfortable working with large internal and external data sets.
- Experience in OLAP and OLTP, data warehousing, and data model concepts.
- Good experience with Vertica, Oracle, Denodo, and similar databases.
- Experienced in the design, development, and implementation of large-scale projects in the financial industry using data warehousing ETL tools (Pentaho).
- Experience creating and scheduling ETL transformations and jobs using the Pentaho Kettle Spoon designer and Pentaho Data Integration designer.
- Proficient in writing SQL statements, complex stored procedures, dynamic SQL queries, batches, scripts, functions, triggers, views, and cursors, and in query optimization (see the sketch below this listing).
- Excellent data analysis skills.
- Working knowledge of source control tools such as GitLab.
- Good analytical skills.
- Good understanding of PDI architecture.
- Experience with Splunk is a plus.

To be successful in this role, we're seeking the following:
- Bachelor's degree in business or a related discipline, or equivalent work experience.
- Relevant domain expertise in the alternative investment services domain, or the capital markets and financial services domain, is required.

At BNY, our culture speaks for itself. Here are a few of our awards:
- America's Most Innovative Companies, Fortune, 2024
- World's Most Admired Companies, Fortune, 2024
- Human Rights Campaign Foundation, Corporate Equality Index, 100% score, 2023-2024
- Best Places to Work for Disability Inclusion, Disability: IN, 100% score, 2023-2024
- Most Just Companies, Just Capital and CNBC, 2024
- Dow Jones Sustainability Indices, top performing company for sustainability, 2024
- Bloomberg's Gender Equality Index (GEI), 2023

Our Benefits and Rewards:
BNY offers highly competitive compensation, benefits, and wellbeing programs rooted in a strong culture of excellence and our pay-for-performance philosophy. We provide access to flexible global resources and tools for your life's journey. Focus on your health, foster your personal resilience, and reach your financial goals as a valued member of our team, along with generous paid leaves, including paid volunteer time, that can support you and your family through moments that matter.

BNY is an Equal Employment Opportunity/Affirmative Action Employer - Underrepresented racial and ethnic groups/Females/Individuals with Disabilities/Protected Veterans.
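
The query-optimization skill above often reduces to: find the frequent filter column, index it, and verify the plan changed. A minimal, self-contained sketch (SQLite is used only for portability; Vertica/Oracle syntax and planners differ, and the table is invented):

```python
# Hypothetical index-tuning sketch: compare query plans before and after indexing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (trade_id INTEGER, account_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(i, f"acct-{i % 100}", i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(amount) FROM trades WHERE account_id = ?"

# Before: the plan shows a full table scan.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", ("acct-7",)).fetchall())

conn.execute("CREATE INDEX idx_trades_account ON trades(account_id)")

# After: the plan should now search using idx_trades_account.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", ("acct-7",)).fetchall())
```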

Posted 2 months ago

Apply

6 - 9 years

20 - 25 Lacs

Bengaluru

Hybrid

Company Description
Epsilon is the leader in outcome-based marketing. We enable marketing that's built on proof, not promises. Through Epsilon PeopleCloud, the marketing platform for personalizing consumer journeys with performance transparency, Epsilon helps marketers anticipate, activate and prove measurable business outcomes. Powered by CORE ID, the most accurate and stable identity management platform representing 200+ million people, Epsilon's award-winning data and technology is rooted in privacy by design and underpinned by powerful AI. With more than 50 years of experience in personalization and performance working with the world's top brands, agencies and publishers, Epsilon is a trusted partner leading CRM, digital media, loyalty and email programs. Positioned at the core of Publicis Groupe, Epsilon is a global company with over 8,000 employees in over 40 offices around the world. For more information, visit https://www.epsilon.com/apac (APAC). Follow us on Twitter at @EpsilonMktg. To see how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice, and to get a feel for our resilient minds, our values and our great culture, visit https://www.epsilon.com/apac/youniverse

Job Description
The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices, ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story.

The candidate will be the Senior Software Engineer for the Business Intelligence team in the Product Engineering group. The Business Intelligence team partners with internal and external clients and technology providers to develop, implement, and manage state-of-the-art data analytics, business intelligence and data visualization solutions for our marketing products. The Sr. Software Engineer will be an individual with strong technical expertise in business intelligence and analytics solutions/tools, working on the BI strategy in terms of toolset selection, report and visualization best practices, team training, and environment efficiency.

Why we are looking for you
- You combine technical leadership with architectural design skills.
- You have a solid foundation in business intelligence and analytics solutions/tools.
- You have experience in product engineering and software development using Tableau, SAP Business Objects, and Kibana dashboard development.
- You have experience with data integration tools like Databricks.
- You excel at collaborating with different stakeholders (ERP, CRM, Data Hub and business stakeholders).
- You have strong experience building reusable database components using SQL queries.
- You enjoy new challenges and are solution oriented.
- You like mentoring people and enabling collaboration of the highest order.

What you will enjoy in this role
As part of the Epsilon Product Engineering team, the pace of the work matches the fast-evolving demands of Fortune 500 clients across the globe. As part of an innovative team that's not afraid to take risks, your ideas will come to life in digital marketing products that support more than 50% of automotive dealers in the US. You will enjoy an open and transparent environment that values innovation and efficiency, with exposure to all the different Epsilon products, where reporting plays a key role in efficient decision-making for end users.

What you will do
- Work on our BI strategy in terms of toolset selection, report and visualization best practices, team training, and environment efficiency.
- Analyze requirements and design data analytics and enterprise reporting solutions in various frameworks (such as Tableau, SAP Business Objects, and others) as part of enterprise, multi-tier, customer-facing applications.
- Develop, hands-on, data analytics and enterprise reporting solutions in frameworks such as Tableau, SAP Business Objects, and Kibana; scripting skills in Python are good to have.
- Build data integration and aggregate pipelines using Databricks (see the sketch below this listing).
- Provide estimates for BI solutions to be developed and deployed.
- Develop and support cloud infrastructure for BI solutions, including automation, process definition and support documentation as required.
- Work in an agile environment and align with agile/scrum methodology for development work.
- Follow data management processes and procedures, and provide input to the creation of data definitions, business rules and data access methods.
- Collaborate with database administrators and data warehouse architects on data access patterns to optimize data visualization and processing.
- Assess and propose infrastructure designs for BI solutions catering to system availability and fault-tolerance needs.
- Establish best practices for workloads on multi-tenant deployments.
- Document solutions and train implementation and operational support teams.
- Assess gaps in solutions and make recommendations on how to solve the problem.
- Understand the priorities of various projects and help steer organizational tradeoffs to keep focus on the most important initiatives.
- Show initiative and take responsibility for decisions that impact project and team goals.

Qualifications
- BE/B.Tech/MCA only; no correspondence courses.
- 7+ years of overall technical hands-on experience; supervisory experience is good to have.
- Experience developing BI solutions in enterprise reporting frameworks.
- Experience designing the semantic layer in reporting frameworks and developing reporting models on an OLTP or OLAP environment.
- Experience working with large data sets, both structured and unstructured, data warehouses and data lakes.
- Strong knowledge of multitenancy concepts; object, folder and user group templates; and user access models in BI reporting tool frameworks, including single sign-on integrations with identity and access management systems such as Okta.
- Experience performing periodic sizing and establishing monitoring, backup and restore procedures catering to MTTR and MTBF expectations.
- Working knowledge of OLTP and relational database concepts, data warehouse concepts/best practices, and data modeling.
- Experience documenting technical designs, procedures and reusable artifacts, and providing technical guidance as needed.
- Familiarity with cloud stacks (AWS, Azure), cloud deployments and tools.
- Ability to work on multiple assignments concurrently.
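
The Databricks aggregate-pipeline bullet above typically means pre-summarizing raw events into a small table that a BI tool (Tableau, Business Objects, Kibana) can query cheaply. A minimal, hypothetical sketch; paths, table names, and columns are invented, and saveAsTable assumes an existing "reporting" schema in the metastore:

```python
# Hypothetical aggregate pipeline: raw events -> compact daily reporting table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bi-aggregate-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/campaign_events/")

daily = (events
         .groupBy(F.to_date("event_ts").alias("event_date"), "campaign_id")
         .agg(F.countDistinct("user_id").alias("unique_users"),
              F.count("*").alias("events")))

# A pre-aggregated table keeps dashboard queries fast and predictable.
daily.write.mode("overwrite").saveAsTable("reporting.campaign_daily")
```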

Posted 2 months ago

Apply

8 - 10 years

25 - 30 Lacs

Bengaluru

Work from Office

Number of Openings*: 1
ECMS Request No. in sourcing stage*: 525266
Duration of contract*: 12 months
Total Yrs. of Experience*: 8-10 yrs.

Detailed JD* (Roles and Responsibilities):
- Manage and maintain NoSQL database systems to ensure optimal performance.
- Monitor database health and troubleshoot performance issues.
- Implement and maintain database security measures to protect sensitive data.
- Collaborate with development teams to design efficient data models (see the sketch below this listing).
- Perform database backups and develop disaster recovery plans.
- Design, manage, and optimize relational databases.
- Configure, deploy, and support SQL Server databases.
- Ensure data security and integrity while managing SQL databases.
- Analyze and translate business needs into data models.
- Develop conceptual, logical, and physical data models.
- Create and enforce database development standards.
- Validate and reconcile data models to ensure accuracy.
- Maintain and update existing data models.

Mandatory skills*: Knowledge of OLTP and OLAP data modeling; NoSQL databases, MongoDB preferred.
Desired skills*: Good at SQL and PL/SQL; experience in MySQL is a bonus. Must have the interpersonal skills to work with the client and understand the data model of insurance systems.
Domain*: Insurance
Approx. vendor billing rate excluding service tax*: 7588 INR/day
Precise work location* (e.g., Bangalore Infosys SEZ or STP): No constraint; Mumbai, Bengaluru, or Pune preferred.
BG check (before or after onboarding): Pre-onboarding
Any client prerequisite BGV agency*: NA
Any working in shifts apart from standard daylight (to avoid confusion post onboarding)*: IST only
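
For the MongoDB data-modeling skill above, the core decision is embedding versus referencing. A minimal sketch (invented insurance schema, not the client's actual model): coverages are embedded because they are always read with their policy, and the dominant lookup key gets a unique index.

```python
# Hypothetical MongoDB document model for an insurance policy.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
db = client["insurance"]

policy = {
    "policy_id": "POL-100042",
    "holder": {"name": "A. Kumar", "city": "Mumbai"},
    # Embedded: coverages are always fetched with their policy (OLTP access path).
    "coverages": [
        {"type": "fire", "sum_insured": 500000},
        {"type": "theft", "sum_insured": 200000},
    ],
}
db.policies.insert_one(policy)

# Secondary index supporting the dominant query pattern.
db.policies.create_index([("policy_id", ASCENDING)], unique=True)

print(db.policies.find_one({"policy_id": "POL-100042"})["coverages"])
```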

Posted 2 months ago

Apply

10 - 15 years

10 - 14 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

The position is part of the Solutions Integration practice, which focuses on the integration of information, processes and people through the application of multiple technologies. The candidate is expected to handle small to medium-scale consulting projects and should possess skills in the design, development, integration, and deployment of data extraction/load programs. Previous experience within banking and financial services is preferred. To be considered, a candidate should be available to travel (5% or more) and possess the required skills mentioned below. The position will be based in the C&R Software office in Bangalore, India. We offer a hybrid model of working.

Position description - Solution Integration - Lead ETL Consultant - Band D

Role/responsibilities:
- Design, develop, deploy, and support modules of our world-class enterprise-level solution for our international client base.
- Drive the technical architecture and design process in conjunction with client requirements.
- Evaluate new design specifications, raise quality standards, and address architectural concerns.
- Evaluate stability, compatibility, scalability, interoperability, and performance of the solution.
- Own design aspects: performance, restartability, logging, error handling, and security for both on-premises and cloud customers (see the sketch below this listing).
- Continually learn new technologies in related areas.
- Act as the single point of contact (SPOC) for the technical implementation.
- Work with the Project Manager to plan and deliver projects from requirements through go-live.
- Be responsible for successfully delivering projects with accountability and ownership.
- Grasp the broader picture of the project and contribute accordingly, including understanding the business, technical and architectural aspects of project implementations.
- Build reusable artefacts and use them to reduce development, testing, deployment and maintenance efforts.
- Work with multiple customers at the same time.
- Adapt to SDLC, iterative and Agile methodologies.
- Interact with clients/onsite team members to understand project requirements and goals.
- Lead client workshops (face to face or over the phone) for consulting; drive solutioning and issue resolution with the client.
- Follow up and escalate gaps, issues and enhancements identified throughout the project and drive them to closure.
- Display a high level of knowledge and consistent service in all interactions with the client.
- Establish positive client relationships to facilitate our implementations.
- Support client activities throughout the implementation project life cycle, including testing phases.
- Support and review test strategy/planning of the end-to-end solution.
- Lead the development of detailed business and technical specifications based on project requirements and turn them into data extraction/load programs.
- Program the ETL tool with business rules to be applied to data from source input to target data repository.
- Develop and assist in automating data extraction/load programs to run on a regular schedule.
- Assist in managing daily, weekly, and monthly data operations and scheduled processes.
- Perform data conversion, quality, and integrity checks for all programs/processes.
- Mentor junior members of the team and be responsible for their deliverables.
- Engage in pre-sales demonstrations, providing solutions and estimates.

In addition to these skills, the individual needs to be skilled in business analysis and knowledge acquisition. An integration consultant interacts with clients (both business and technical personnel) on a constant basis, so very good communication skills are essential: the consultant should listen carefully to clients and facilitate information-gathering sessions.

Skills/Experience requirements:
- Overall 10+ years of IT industry experience.
- Undergraduate/graduate degree in Computer Science or Computer Applications, such as B.Sc. / B.C.A. / B.Tech. / B.E. / M.Sc. / M.Tech. / M.E. / M.C.A.
- Strong experience in understanding business requirements and converting them into detailed functional and technical specifications.
- 7 years' experience with an ETL tool, preferably Kettle, with knowledge of Metadata Injection, Kettle DB logging, and Carte.
- 7 years' experience writing PL/SQL or T-SQL programs and queries on Oracle / SQL Server.
- Strong knowledge of RDBMS concepts and OLTP system architecture.
- Minimum 5 years' experience writing shell scripts on UNIX (Sun Solaris).
- Competent with SQL/databases, SQL Server / Postgres, SSRS and other analytical programs, with the desire and ability to understand new software applications.
- Experience reviewing query performance and optimizing/developing more efficient code.
- Experience creating table indexes to improve database performance.
- Experience writing complex operations, views, stored procedures, triggers and functions to support business needs in a high-availability environment.
- Strong knowledge of source code control mechanisms in any tool; knowledge of Git / Bitbucket is an added advantage.
- Strong knowledge of XML and JSON structures, and Jenkins.
- Experience with job scheduling and working knowledge of at least one third-party scheduler.
- Hands-on experience with AWS services such as PostgreSQL, Aurora, and Lambda is preferred.
- Ability to perform data research and root cause analysis on data issues/discrepancies.
- Experience using SOAP and REST to access web services.
- Experience in JavaScript, HTML, CSS.
- Excellent written and verbal communication skills.
- Excellent interpersonal skills; comfortable establishing professional relationships, especially remotely (electronic, phone, written).
- Proven ability to plan and execute effectively to meet critical time-sensitive objectives.
- Ability to work effectively alone and independently.
- Experience in the banking or financial industry is preferred.
- Experience in SSRS report development.
- Working knowledge of Python scripting is preferred.
- Good mentorship skills.
- Ability to deliver effectively in high-pressure situations.
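
The restartability/logging/error-handling bullet above boils down to a simple pattern: checkpoint completed work so a failed scheduled load can rerun without duplicating anything. A minimal, tool-agnostic sketch (file names, paths, and the state-file scheme are invented; a real Kettle/PDI job would carry the same idea in its job-level DB logging):

```python
# Hypothetical restartable batch load with checkpointing and logging.
import json
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
STATE_FILE = Path("etl_state.json")

def load_state() -> set[str]:
    return set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()

def save_state(done: set[str]) -> None:
    STATE_FILE.write_text(json.dumps(sorted(done)))

def process(batch: Path) -> None:
    logging.info("loading %s", batch)  # real job: extract/transform/load this file

def run(input_dir: str) -> None:
    done = load_state()
    for batch in sorted(Path(input_dir).glob("*.csv")):
        if batch.name in done:
            logging.info("skipping %s (already loaded)", batch.name)
            continue
        try:
            process(batch)
            done.add(batch.name)
            save_state(done)  # checkpoint after every completed batch
        except Exception:
            logging.exception("failed on %s; rerun resumes from here", batch.name)
            raise

if __name__ == "__main__":
    run("incoming")
```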

Posted 2 months ago

Apply

2 - 6 years

5 - 9 Lacs

Hyderabad

Work from Office

AWS Data Engineer

As an AWS Data Engineer, you will contribute to our client and will have the below responsibilities:
- Work with the technical development team and team lead to understand desired application capabilities.
- Develop using application development lifecycles and continuous integration/deployment practices.
- Work to integrate open-source components into data-analytic solutions.
- Willingness to continuously learn and share learnings with others.

Required: 5+ years of directly applicable experience with key focus on Glue, Python, AWS, and data pipeline creation, including:
- Developing code using Python, such as:
  - Developing data pipelines from various external data sources to internal data.
  - Using Glue to extract data from the designated database.
  - Developing Python APIs as needed.
- Minimum 3 years of hands-on experience with Amazon Web Services, including EC2, VPC, S3, EBS, ELB, CloudFront, IAM, RDS, and CloudWatch.
- Able to interpret business requirements and analyze, design and develop applications on AWS Cloud and ETL technologies.
- Able to design and architect serverless applications using AWS Lambda, EMR, and DynamoDB.
- Ability to leverage AWS data migration tools and technologies, including Storage Gateway, Database Migration Service, and Import/Export services.
- Understands relational database design, stored procedures, triggers, user-defined functions, and SQL jobs.
- Familiar with CI/CD tools (e.g., Jenkins, UCD) for automated application deployments.
- Understanding of OLAP, OLTP, star schema, snowflake schema, and logical/physical/dimensional data modeling.
- Ability to extract data from multiple operational sources and load it into staging, data warehouses, data marts, etc., using SCD (Type 1 / Type 2 / Type 3 / hybrid) loads (see the sketch below this listing).
- Familiar with Software Development Life Cycle (SDLC) stages in Waterfall and Agile environments.

Nice to have:
- Familiarity with source control management tools for branching, merging, labeling/tagging and integration, such as Git and SVN.
- Experience working with UNIX/Linux environments.
- Hands-on experience with IDEs such as Jupyter Notebook.

Education & Certification
- University degree or diploma and applicable years of experience.
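
The SCD bullet above is worth a sketch. Below is a minimal PySpark outline of a Type 2 load, the variant that keeps history: when a tracked attribute changes, the current dimension row is expired and a new current row is inserted. Schemas, paths, and the tracked column are hypothetical; a real Glue job would union these frames with the untouched rows and rewrite the dimension.

```python
# Hypothetical SCD Type 2 load: expire changed rows, insert new current versions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Dimension carries is_current, valid_from, valid_to housekeeping columns.
dim = spark.read.parquet("s3://example-bucket/warehouse/dim_customer/")
incoming = spark.read.parquet("s3://example-bucket/staging/customers/")

# Keys whose tracked attribute changed since the current dimension version.
changed = (dim.filter("is_current = true").alias("d")
              .join(incoming.alias("s"), "customer_id")
              .filter("d.address <> s.address")
              .select("d.customer_id"))

today = F.current_date()

# Expire the superseded current rows...
expired = (dim.join(changed, "customer_id", "left_semi")
              .filter("is_current = true")
              .withColumn("is_current", F.lit(False))
              .withColumn("valid_to", today))

# ...and build the replacement current rows from incoming data.
new_rows = (incoming.join(changed, "customer_id", "left_semi")
                    .withColumn("is_current", F.lit(True))
                    .withColumn("valid_from", today)
                    .withColumn("valid_to", F.lit(None).cast("date")))
```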

Posted 2 months ago

Apply

12 - 22 years

35 - 60 Lacs

Chennai

Hybrid

Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 yrs
Location: Pan India

Job Description:
- The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
- Build dimensional data models applying best practices and providing business insights (see the sketch below this listing).
- Build data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
- Identify business needs and translate business requirements into conceptual, logical, physical and semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
- Create and maintain the source-to-target data mapping document for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
- Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
- Work with the development team to implement data strategies, build data flows and develop conceptual data models.
- Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
- Optimize and update logical and physical data models to support new and existing projects.
- Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
- Data design and performance optimization for large data warehouse solutions.
- Understand data through profiling and analysis: metadata (formats, definitions, valid values, boundaries) and relationships/usage.
- Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse and BI systems.
- Should be good in verbal and written communication.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
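
The dimensional (star) models this JD centers on have a simple shape: one fact table at a declared grain, keyed to conformed dimensions. A minimal sketch with invented names; SQLite is used only so the DDL is runnable as-is, and a real warehouse would add distribution/partitioning clauses.

```python
# Hypothetical star schema: one fact table keyed to two dimensions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key    INTEGER PRIMARY KEY,   -- e.g., 20240131
    full_date   TEXT NOT NULL,
    month       INTEGER NOT NULL,
    year        INTEGER NOT NULL
);

CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,   -- surrogate key
    product_id  TEXT NOT NULL,         -- natural/business key
    category    TEXT NOT NULL
);

-- Grain: one row per product per day.
CREATE TABLE fct_sales (
    date_key    INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key INTEGER NOT NULL REFERENCES dim_product(product_key),
    units_sold  INTEGER NOT NULL,
    net_amount  REAL NOT NULL
);
""")
```

Declaring the grain first is the Kimball discipline the JD alludes to: every measure in the fact table must be true at exactly that grain.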

Posted 2 months ago

Apply

12 - 22 years

35 - 60 Lacs

Kolkata

Hybrid

Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 yrs
Location: Pan India

Job Description:
- The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
- Build dimensional data models applying best practices and providing business insights.
- Build data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
- Identify business needs and translate business requirements into conceptual, logical, physical and semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
- Create and maintain the source-to-target data mapping document for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
- Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
- Work with the development team to implement data strategies, build data flows and develop conceptual data models.
- Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
- Optimize and update logical and physical data models to support new and existing projects.
- Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
- Data design and performance optimization for large data warehouse solutions.
- Understand data through profiling and analysis: metadata (formats, definitions, valid values, boundaries) and relationships/usage.
- Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse and BI systems.
- Should be good in verbal and written communication.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 2 months ago

Apply

12 - 22 years

35 - 60 Lacs

Noida

Hybrid

Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 yrs
Location: Pan India

Job Description:
- The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
- Build dimensional data models applying best practices and providing business insights.
- Build data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
- Identify business needs and translate business requirements into conceptual, logical, physical and semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
- Create and maintain the source-to-target data mapping document for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
- Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
- Work with the development team to implement data strategies, build data flows and develop conceptual data models.
- Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
- Optimize and update logical and physical data models to support new and existing projects.
- Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
- Data design and performance optimization for large data warehouse solutions.
- Understand data through profiling and analysis: metadata (formats, definitions, valid values, boundaries) and relationships/usage.
- Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse and BI systems.
- Should be good in verbal and written communication.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 2 months ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
