10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You are as unique as your background, experience and point of view. Here, you’ll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world. Job Description: Lead Solution Designer. The Senior Data Solution Designer plays a pivotal role within our evolving Data Engineering team, leading the strategic implementation of data solutions on AWS. You will be responsible for driving the technical vision and execution of cloud-based data architectures, ensuring the platform’s scalability, security, and performance while addressing both business and technical requirements. This role demands a hands-on leader with deep technical expertise who can also steer strategic initiatives to success. Responsibilities: Spearhead the implementation, performance fine-tuning, development, and delivery of data solutions using AWS core data services, driving innovation in Data Engineering, Governance, Integration, and Virtualization. Oversee all technical aspects of AWS-based data systems, ensuring end-to-end delivery. Coordinate with multiple stakeholders to ensure timely implementation and effective value realization. Continuously enhance the D&A platform to improve performance, resiliency, scalability, and security while incorporating new data technologies and methodologies. Work closely with business partners, data architects, and cross-functional teams to translate complex business requirements into technical solutions. Develop and implement data management strategies, including Data Lakehouse, Data Warehousing, Master Data Management, and Advanced Analytics solutions. Combine technical solutioning with hands-on work as needed, actively contributing to the architecture, coding, and deployment of critical data systems. Ensure system health by monitoring platform performance, identifying potential issues, and taking preventive or corrective measures as needed. Be accountable for the accuracy, consistency, and overall quality of the data used in various applications and analytical processes. Advocate for the use of best practices in data engineering, governance, and AWS cloud technologies, advising on the latest tools and standards. Qualifications: 10+ years of hands-on experience in developing and architecting data solutions, with a strong background in AWS cloud services. Proven experience designing and implementing AWS data services (such as S3, Redshift, Athena, Glue, etc.) and a solid understanding of data service design patterns (an illustrative query sketch follows this listing). Expertise in building large-scale data platforms, including Data Lakehouse, Data Warehouse, Master Data Management, and Advanced Analytics systems. Ability to effectively communicate complex technical solutions to both technical and non-technical stakeholders. Experience working with multi-disciplinary teams and aligning data strategies with business objectives.
Demonstrated experience managing multiple projects in a high-pressure environment, ensuring timely and high-quality delivery. Strong problem-solving skills, with the ability to make data-driven decisions and approach challenges methodically. Proficiency in data solution coding, ensuring access and integration for analytical and decision-making processes. Good verbal and written communication skills and the ability to work independently as well as in a team environment, providing structure in ambiguous situations. Good to have: Experience within the Insurance domain is an added advantage. Solid understanding of data governance frameworks and Master Data Management principles. This role is ideal for an experienced data architect who is passionate about leading innovative AWS data solutions while balancing technical expertise with business needs. Job Category: IT - Digital Development Posting End Date: 17/07/2025
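For readers unfamiliar with the AWS data services named in the qualifications above, here is a minimal, illustrative sketch of querying S3-backed data through Athena with boto3. The database, table, query, result-bucket location, and region are hypothetical placeholders rather than details from the posting, and a production version would add pagination and error handling.

    import time
    import boto3

    athena = boto3.client("athena", region_name="ap-south-1")  # placeholder region

    # Hypothetical database, table, and result bucket, used purely for illustration.
    QUERY = "SELECT policy_id, premium FROM analytics_db.policies LIMIT 10"
    OUTPUT = "s3://example-athena-results/lead-solution-designer-demo/"

    def run_athena_query(sql: str, output_location: str) -> list:
        """Submit a query to Athena, poll until it finishes, then fetch the rows."""
        execution = athena.start_query_execution(
            QueryString=sql,
            ResultConfiguration={"OutputLocation": output_location},
        )
        query_id = execution["QueryExecutionId"]

        while True:
            state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(2)

        if state != "SUCCEEDED":
            raise RuntimeError(f"Athena query ended in state {state}")

        return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

    if __name__ == "__main__":
        for row in run_athena_query(QUERY, OUTPUT):
            print(row)

The same submit-poll-fetch pattern applies whether the catalog tables were created by Glue crawlers or defined manually.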
Posted 1 week ago
15.0 years
0 Lacs
Hyderābād
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. Position Summary: The Senior Manager, Senior Solution Engineer, Drug Development Information Technology (DDIT) will be part of the product team committed to bridging the gap between technology and business needs within the Clinical Data Ecosystem (CDE), which primarily delivers technology strategy and solutions for clinical trial execution, Global Biostatistics and Data Sciences, and Clinical Data Management (Clinical Analytics, Site Selection, Feasibility, Real World Evidence). The role is based out of our Hyderabad office and is part of the Research and Development (R&D) BI&T data team that delivers data and analytics capabilities across Drug Development (DD). You will be specifically supporting our Clinical Data Filing and Sharing (CDFS) product line. This is a critical role that supports systems necessary for BMS' direct value chain: regulated analysis and reporting for every trial at BMS. Desired Candidate Characteristics: Have a strong commitment to a career in technology with a passion for healthcare. Ability to understand the needs of the business and commitment to deliver the best user experience and adoption. Able to collaborate across multiple teams. Demonstrated leadership experience. Excellent communication skills. Innovative and inquisitive nature to ask questions, offer bold ideas and challenge the status quo. Agility to learn new tools and processes. As the candidate grows in their role, they will get additional training, and there is opportunity to expand responsibilities and exposure to additional areas within Drug Development. This includes working with Data Product Leads, providing input and innovation opportunities to modernize with cutting-edge technologies (Agentic AI, advanced automation, visualization and application development techniques). Key Responsibilities: Architect and lead the evolution of the Statistical Computing Environment (SCE) platform to support modern clinical trial requirements. Partner with data management and statistical programming leads to support seamless integration of data pipelines and metadata-driven standards. Lead the development of automated workflows for clinical data ingestion, transformation, analysis, and reporting within the SCE. Drive process automation and efficiency initiatives in data preparation and statistical programming workflows. Develop and implement solutions to enhance system performance, stability and security. Act as a subject matter expert for AWS SAS implementations and integration with clinical systems.
Lead the implementation of cloud-based infrastructure using AWS EC2, Auto Scaling, CloudWatch, and related AWS packages. Provide architectural guidance and oversight for CDISC SDTM/ADaM data standards and eCTD regulatory submissions. Collaborate with cross-functional teams to identify product improvements and enhancements. Administer the production environment and diagnose and resolve technical issues in a timely manner, documenting solutions for future reference. Coordinate with vendors, suppliers, and contractors to ensure the timely delivery of products and services. Serve as a technical mentor for development and operations teams supporting SCE solutions. Analyze business challenges and identify areas for improvement through technology solutions. Ensure regulatory and security compliance through proper governance and access controls. Provide guidance to the resources supporting projects, enhancements, and operations. Stay up to date with the latest technology trends and industry best practices. Qualifications & Experience: Master's or Bachelor's degree in computer science, information technology, or a related field preferred. 15+ years of experience in software development and engineering, clinical development or the data science field. 8-10 years of hands-on experience implementing and operating different types of Statistical Computing Environments (SCE) within the Life Sciences and Healthcare business vertical. Strong experience with SAS in an AWS-hosted environment, including EC2, S3, IAM, Glue, Athena, and Lambda. Hands-on development experience managing and delivering data solutions with AWS data, analytics, and AI technologies such as AWS Glue, Redshift, RDS (PostgreSQL), S3, Athena, Lambda, Databricks, and Business Intelligence and visualization tools. Experience with R, Python, or other programming languages for data analysis or automation. Experience in shell/Python scripting and Linux automation for operational monitoring and alerting across the environment (an illustrative sketch follows this listing). Familiarity with cloud DevOps practices and infrastructure-as-code (e.g., CloudFormation, Terraform). Expertise in SAS Grid architecture, grid node orchestration, and job lifecycle management. Strong working knowledge of SASGSUB, job submission parameters, and performance tuning. Understanding of submission readiness and Health Authority requirements for data traceability and transparency. Excellent communication, collaboration and interpersonal skills to interact with diverse stakeholders. Ability to work both independently and collaboratively in a team-oriented environment. Comfortable working in a fast-paced environment with minimal oversight. Prior experience working in an Agile-based environment. If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work.
This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
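As a rough illustration of the "shell/Python scripting and Linux automation for operational monitoring and alerting" qualification in this listing, the sketch below checks an EC2 instance's recent CPU utilization in CloudWatch and raises an SNS alert when it stays above a threshold. The instance ID, topic ARN, region, and threshold are assumed values for illustration only.

    from datetime import datetime, timedelta, timezone

    import boto3

    # Hypothetical identifiers -- replace with real resources.
    INSTANCE_ID = "i-0123456789abcdef0"
    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:sce-ops-alerts"
    THRESHOLD = 85.0  # percent CPU

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    sns = boto3.client("sns", region_name="us-east-1")

    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=now - timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )

    # Alert only if every 5-minute average in the window breached the threshold.
    averages = [point["Average"] for point in stats["Datapoints"]]
    if averages and min(averages) > THRESHOLD:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"High CPU on {INSTANCE_ID}",
            Message=f"CPU averaged above {THRESHOLD}% for 15 minutes: {averages}",
        )

A cron entry or systemd timer on a Linux host is enough to run a check like this on a schedule.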
Posted 1 week ago
15.0 years
8 - 10 Lacs
Gurgaon
On-site
ROLES & RESPONSIBILITIES We are seeking an experienced and visionary Data Architect with over 15 years of experience to lead the design and implementation of scalable, secure, and high-performing data architectures. The ideal candidate should have a deep understanding of cloud-native architectures, enterprise data platforms, and end-to-end data lifecycle management. You will work closely with business, engineering, and product teams to craft robust data solutions that drive business intelligence, analytics, and AI initiatives. Key Responsibilities: Design and implement enterprise-grade data architectures using cloud platforms (e.g., AWS, Azure, GCP). Lead the definition of data architecture standards, guidelines, and best practices. Architect scalable data solutions including data lakes, data warehouses, and real-time streaming platforms. Collaborate with data engineers, analysts, and data scientists to understand data requirements and deliver optimal solutions. Oversee data modeling activities including conceptual, logical, and physical data models. Ensure data security, privacy, and compliance with applicable regulations (e.g., GDPR, HIPAA). Define and implement data governance strategies in collaboration with stakeholders. Evaluate and recommend data-related tools and technologies. Provide architectural guidance and mentorship to data engineering teams. Participate in client discussions, pre-sales, and proposal building (if in a consulting environment). Required Skills & Qualifications: 15+ years of experience in data architecture, data engineering, or database development. Strong experience architecting data solutions on at least one major cloud platform (AWS, Azure, or GCP). Deep understanding of data management principles, data modeling, ETL/ELT pipelines, and data warehousing. Hands-on experience with modern data platforms and tools (e.g., Snowflake, Databricks, BigQuery, Redshift, Synapse, Apache Spark). Proficiency with programming languages such as Python, SQL, or Java. Familiarity with real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub. Experience implementing data governance, data cataloging, and data quality frameworks. Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Cloud Architect or Data Architect certification (AWS/Azure/GCP) is a strong plus. Preferred Certifications: AWS Certified Solutions Architect – Professional Microsoft Certified: Azure Solutions Architect Expert Google Cloud Professional Data Engineer TOGAF or equivalent architecture frameworks What We Offer: A collaborative and inclusive work environment Opportunity to work on cutting-edge data and AI projects Flexible work options Competitive compensation and benefits package EXPERIENCE 16-18 Years SKILLS Primary Skill: Data Architecture Sub Skill(s): Data Architecture Additional Skill(s): Data Architecture ABOUT THE COMPANY Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. 
Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
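To make the ETL/ELT and Apache Spark skills in the listing above a little more concrete, here is a minimal PySpark sketch that reads raw events from a data lake, aggregates them, and writes an analytics-ready layer. The S3 paths and column names are invented for illustration and are not from the posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_orders_elt").getOrCreate()

    # Hypothetical lake locations -- raw zone in, curated zone out.
    RAW_PATH = "s3://example-lake/raw/orders/"
    CURATED_PATH = "s3://example-lake/curated/daily_order_summary/"

    orders = spark.read.parquet(RAW_PATH)

    daily_summary = (
        orders
        .filter(F.col("status") == "COMPLETED")                  # drop cancelled/failed orders
        .withColumn("order_date", F.to_date("order_timestamp"))  # derive a partition-friendly date
        .groupBy("order_date", "region")
        .agg(
            F.count("*").alias("order_count"),
            F.sum("order_amount").alias("gross_revenue"),
        )
    )

    # Overwrite the curated layer, partitioned for downstream BI and warehouse loads.
    daily_summary.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)

The same read-transform-write shape carries over to Databricks, Snowflake external tables, or Redshift Spectrum with only the source and sink swapped.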
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Analytics Ops and Programs team in Hyderabad is looking for an innovative, hands-on and customer-obsessed Business Analyst. The candidate must be detail oriented, have superior verbal and written communication skills, strong organizational skills and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience and get the right things done. This job requires you to constantly hit the ground running and have the ability to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline the process, identifying gaps in the existing process by analyzing data and liaising with the relevant team(s) to close them, and analyzing data and metrics and sharing updates with internal teams. Amazon is an equal opportunity employer. Basic Qualifications 4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in statistical analysis packages such as R, SAS and MATLAB Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ASSPL - Telangana Job ID: A2969869
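The basic qualification about using SQL to pull data from a warehouse and Python to process it for modeling roughly translates to a workflow like the sketch below. The connection string, schema, and anomaly rule are illustrative assumptions, not anything specified in the posting.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical Redshift connection details -- use real credentials/secrets handling in practice.
    engine = create_engine("postgresql+psycopg2://analyst:password@example-cluster:5439/analytics")

    # Pull data with SQL...
    query = """
        SELECT order_day, marketplace, SUM(units) AS units
        FROM ops.daily_orders
        WHERE order_day >= CURRENT_DATE - 28
        GROUP BY order_day, marketplace
    """
    df = pd.read_sql(query, engine)

    # ...then process it in Python: flag days that deviate sharply from the trailing mean.
    df["rolling_mean"] = df.groupby("marketplace")["units"].transform(
        lambda s: s.rolling(7, min_periods=1).mean()
    )
    df["anomaly"] = (df["units"] - df["rolling_mean"]).abs() > 0.3 * df["rolling_mean"]
    print(df[df["anomaly"]])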
Posted 1 week ago
12.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 80039 Date: Jul 4, 2025 Location: Delhi Designation: Manager Entity: Deloitte Touche Tohmatsu India LLP DATE: 18-Feb-2025 BUSINESS TITLE: Data Architect POSITION DESCRIPTION WHAT YOU’LL DO Define and design the future-state data architecture for risk data products. Partner with Technology, Data Stewards and various Product teams in an Agile work stream while meeting program goals and deadlines. Create user personas to support various business initiatives. Engage with line-of-business, operations, and project partners to gather process improvements. Lead the design and build of new models to efficiently deliver risk results to senior management. Evaluate data-related tools and technologies and recommend appropriate implementation patterns and standard methodologies to ensure our data ecosystem is always modern. Collaborate with Enterprise Data Architects in establishing and adhering to enterprise standards while also performing POCs to ensure those standards are implemented. Provide technical expertise and mentorship to Data Engineers and Data Analysts on data architecture. Develop and maintain processes, standards, policies, guidelines, and governance to ensure that a consistent framework and set of standards is applied across the company. Create and maintain conceptual/logical data models to identify key business entities and visualize relationships. Work with business and IT teams to understand data requirements. Maintain a data dictionary consisting of table and column definitions (an illustrative sketch follows this listing). Review data models with both technical and business audiences. YOU’RE GOOD AT Designing, documenting and training the team on the overall processes and process flows for the data architecture. Resolving technical challenges in critical situations that require immediate resolution. Developing relationships with external stakeholders to maintain awareness of data and security issues and trends. Reviewing work from other tech team members and providing feedback for growth. Implementing data security policies that align with governance objectives and regulatory requirements. EXPERIENCE & QUALIFICATIONS Bachelor's degree or equivalent combination of education and experience. Bachelor's degree in information science, data management, computer science or a related field preferred. Essential Experience & Job Requirements 12+ years of IT experience with a major focus on data warehouse/database-related projects Expertise in cloud databases like Snowflake/Redshift, data catalogues, MDM, etc. Expertise in writing SQL and database procedures Proficient in data modelling: conceptual, logical, and physical modelling Proficient in documenting all architecture-related work performed.
Hands-on experience in data storage, ETL/ELT and data analytics tools and technologies, e.g., Talend, DBT, Attunity, Golden Gate, FiveTran, APIs, Tableau, Power BI, Alteryx, etc. Experienced in Data Warehousing design/development and BI/analytical systems Experience working on projects using Agile methodologies Strong hands-on experience with data and analytics data architecture, solution design, and engineering Experience with Cloud Big Data technologies such as AWS, Azure, GCP and Snowflake Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs) Review existing databases, data architecture and data models across multiple systems and propose architecture enhancements for cross-compatibility and target systems Excellent written, oral communication and presentation skills to present architecture, features, and solution recommendations YOU’LL WORK WITH Global functional portfolio technical leaders (Finance, HR, Marketing, Legal, Risk, IT), product owners, and functional area teams across levels Global Data Portfolio Management & teams (Enterprise Data Model, Data Catalog, Master Data Management) Consulting and internal Data Portfolio teams
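One of the responsibilities above is maintaining a data dictionary of table and column definitions. A lightweight, illustrative way to seed such a dictionary is to export metadata from the warehouse's INFORMATION_SCHEMA; the sketch below assumes a Snowflake database and uses invented account and credential values.

    import csv
    import snowflake.connector

    # Hypothetical Snowflake account, user, and database names, for illustration only.
    conn = snowflake.connector.connect(
        account="example_account",
        user="data_architect",
        password="********",
        database="RISK_DB",
    )

    # Pull table and column definitions to seed the data dictionary.
    SQL = """
        SELECT table_schema, table_name, column_name, data_type, is_nullable, comment
        FROM RISK_DB.INFORMATION_SCHEMA.COLUMNS
        ORDER BY table_schema, table_name, ordinal_position
    """

    cur = conn.cursor()
    cur.execute(SQL)
    rows = cur.fetchall()
    cur.close()
    conn.close()

    with open("data_dictionary.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["schema", "table", "column", "type", "nullable", "description"])
        writer.writerows(rows)

From there, business definitions and stewardship details would typically be layered on in a catalog tool rather than a CSV.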
Posted 1 week ago
8.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure. Your Key Responsibilities Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture. Skills And Attributes For Success Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment. To qualify for the role, you must have 5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions. 
Technologies and Tools Must-haves Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda Expert in Databricks – ability to develop, optimize, and troubleshoot advanced notebooks Strong experience with Amazon Redshift for scalable data warehousing and analytics Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms Good to have Exposure to Power BI or Tableau for data visualization Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms Understanding of DevOps and CI/CD automation tools for data engineering workflows SQL familiarity across large datasets and distributed databases What We Look For Enthusiastic learners with a passion for data ops and practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
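As an illustration of the orchestration skills this role asks for (Apache Airflow coordinating AWS Glue work), here is a minimal Airflow DAG that triggers a Glue job and waits for it to finish. The DAG ID, Glue job name, schedule, and region are placeholders; Airflow's AWS provider also ships a dedicated Glue operator that could replace the hand-rolled polling shown here.

    from datetime import datetime
    import time

    import boto3
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    GLUE_JOB_NAME = "daily_sales_curation"  # hypothetical Glue job name

    def run_glue_job():
        """Start the Glue job and block until it reaches a terminal state."""
        glue = boto3.client("glue", region_name="us-east-1")
        run_id = glue.start_job_run(JobName=GLUE_JOB_NAME)["JobRunId"]
        while True:
            state = glue.get_job_run(JobName=GLUE_JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
            if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
                break
            time.sleep(30)
        if state != "SUCCEEDED":
            raise RuntimeError(f"Glue job {GLUE_JOB_NAME} finished in state {state}")

    with DAG(
        dag_id="redshift_ingestion_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        curate = PythonOperator(task_id="run_glue_curation", python_callable=run_glue_job)

Downstream tasks (Redshift loads, data-quality checks) would hang off this task in the same DAG to keep SLA monitoring in one place.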
Posted 1 week ago
15.0 years
0 Lacs
Noida
On-site
Required Experience 15 - 25 Years Skills Apache Superset, AWS Redshift, Amazon Redshift + 12 more Job Description: Associate Director - Data Engineering We at Pine Labs are looking for those who share our core belief - Every Day is Game day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services. Responsibilities Own the data engineering roadmap from streaming ingestion to analytics-ready layers. Lead and evolve our real-time data pipelines using Apache Kafka/MSK, Debezium CDC, and Apache Pinot. Architect and maintain custom-built Java/Python-based frameworks for data ingestion, validation, transformation, and replication. Design and manage data lakes using Apache Iceberg, Glue, and Athena over S3. Define and enforce data modelling standards across Pinot, Redshift and warehouse layers. Lead implementation and scaling of open-source BI tools (Superset) and orchestration platforms (Airflow). Drive cloud-native deployment strategies using ECS/EKS, Terraform/CDK, Docker. Collaborate with ML, product, and business teams to support advanced analytics and AI use cases. Mentor data engineers and evangelize modern architecture and engineering best practices. Ensure observability, lineage, data quality, and governance across the data stack. What Matters In This Role Proven expertise in Kafka/MSK, Debezium, and real-time event-driven architectures Hands-on experience with Pinot, Redshift, RocksDB or NoSQL databases Strong background in custom tooling using Java and Python Experience with Apache Airflow, Superset, Iceberg, Athena, and Glue Strong AWS ecosystem knowledge (IAM, S3, Lambda, Glue, ECS/EKS, CloudWatch, etc.) Deep understanding of data lake architecture, streaming vs batch processing, and CDC concepts Familiarity with modern data formats (Parquet, Avro) and storage abstractions Nice-to-have: Exposure to dbt, Trino/Presto, ClickHouse, or Druid Familiarity with data security practices, encryption at rest/in-transit, GDPR/PCI compliance Experience with DevOps practices, GitHub Actions, CI/CD, and Terraform/CloudFormation What We Value In Our People You take the shot: You Decide Fast and You Deliver Right You are the CEO of what you do: you show ownership and make things happen You own tomorrow: by building solutions for the merchants and doing the right thing You sign your work like an artist: You seek to learn and take pride in the work you do
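Because this role centres on Kafka/MSK and Debezium CDC, a small, illustrative consumer for Debezium change events is sketched below. The topic name and broker address are invented, and it assumes the default Debezium envelope (no unwrap transform applied); a production consumer would batch, commit offsets deliberately, and route changes into Pinot or the lake rather than printing them.

    import json
    from kafka import KafkaConsumer

    # Hypothetical topic and broker; adjust to the real Debezium connector configuration.
    consumer = KafkaConsumer(
        "payments.public.transactions",
        bootstrap_servers=["broker-1:9092"],
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        envelope = message.value.get("payload", message.value)
        op = envelope.get("op")          # "c" = insert, "u" = update, "d" = delete, "r" = snapshot read
        before = envelope.get("before")  # row image before the change (None for inserts)
        after = envelope.get("after")    # row image after the change (None for deletes)
        if op in ("c", "r") and after:
            print(f"upsert -> {after}")
        elif op == "u" and after:
            print(f"update {before} -> {after}")
        elif op == "d" and before:
            print(f"delete -> {before}")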
Posted 1 week ago
0 years
7 - 9 Lacs
Jaipur
Remote
Location Jaipur Employment Type Full time Department Product Support Life at UiPath The people at UiPath believe in the transformative power of automation to change how the world works. We’re committed to creating category-leading enterprise software that unleashes that power. To make that happen, we need people who are curious, self-propelled, generous, and genuine. People who love being part of a fast-moving, fast-thinking growth company. And people who care—about each other, about UiPath, and about our larger purpose. Could that be you? Your mission We’re looking for a Support Engineer to join the team in Jaipur. It’s a customer-facing role that’s all about problem-solving – great for someone who enjoys helping others and has a real interest in tech. It’s especially suited to anyone thinking about a future in Data Engineering, Data Science or Platform Ops. You’ll need some basic knowledge of SQL and Python, but this isn’t a software development job. Day-to-day, you’ll be working closely with customers, getting to grips with their issues, and helping them troubleshoot problems on the Peak platform and with their deployed applications. You’ll need to be curious, proactive, and comfortable owning a problem from start to finish. If you’re someone who enjoys figuring things out, explaining things clearly, and digging into the root cause of an issue – this could be a great fit. What you'll do at UiPath Resolve Technical Issues: Troubleshoot and resolve customer issues on the Peak platform, using your analytical and problem-solving skills. Own the Process: Take full ownership of problems - investigate, follow up, escalate when needed, and ensure resolution is delivered. Investigate Errors: Dig into application logs, API responses, and system outputs to identify and resolve errors within Peak-built customer applications. Write Useful Scripts: Use scripting (e.g. in Python or Bash) to automate routine support tasks, extract data, or investigate technical issues efficiently (an illustrative sketch follows this listing). Monitor Systems: Help monitor infrastructure and application health, proactively spotting and flagging unusual behaviour before it becomes a problem. Support Infrastructure Security: Assist with routine security updates and checks, ensuring our systems remain secure and up-to-date. Communicate Clearly: Provide timely, professional updates to both internal teams and customers. You’ll often be the bridge between technical and non-technical people. Contribute to Documentation: Help us build out internal documentation and guides to make solving future issues faster and easier. Be Part of the Team: Participate in a shared on-call rotation to support our customers when they need us most. What you'll bring to the team Educational Requirements: A degree in computer science or a related field, or equivalent academic experience in technology. Technical Skills: Comfortable using Python, Bash, and SQL for scripting, querying data, and troubleshooting. Familiar with Linux and the command line, and confident navigating file systems or running basic system commands. Exposure to cloud platforms (e.g. AWS, GCP, Azure) is a bonus - especially if you’ve explored tools like Snowflake, Redshift, or other modern data warehouses. Any experience with managing or supporting data workflows, backups, restores, or investigating issues across datasets is a strong plus. Communication Skills: Strong verbal and written communication skills in English. Ability to explain technical concepts clearly and concisely to both technical and non-technical audiences.
Personal Attributes: Well-organised with the ability to handle multiple tasks simultaneously. Strong problem-solving and analytical skills. Fast learner with the ability to adapt to new tools and technologies quickly. Excellent interpersonal skills and the ability to work effectively in a team environment. Maybe you don’t tick all the boxes above—but still think you’d be great for the job? Go ahead, apply anyway. Please. Because we know that experience comes in all shapes and sizes—and passion can’t be learned. Many of our roles allow for flexibility in when and where work gets done. Depending on the needs of the business and the role, the number of hybrid, office-based, and remote workers will vary from team to team. Applications are assessed on a rolling basis and there is no fixed deadline for this requisition. The application window may change depending on the volume of applications received or may close immediately if a qualified candidate is selected. We value a range of diverse backgrounds, experiences and ideas. We pride ourselves on our diversity and inclusive workplace that provides equal opportunities to all persons regardless of age, race, color, religion, sex, sexual orientation, gender identity and expression, national origin, disability, neurodiversity, military and/or veteran status, or any other protected classes. Additionally, UiPath provides reasonable accommodations for candidates on request and respects applicants' privacy rights. To review these and other legal disclosures, visit our privacy policy.
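To give a feel for the "Write Useful Scripts" responsibility in this listing, here is a small, illustrative support script that summarises recent errors from an application log and checks an API health endpoint. The log path, URL, and log format are hypothetical; the Peak platform's real logs and endpoints are not described in the posting.

    import re
    from collections import Counter
    from pathlib import Path

    import requests

    LOG_PATH = Path("/var/log/example-app/app.log")    # hypothetical log location
    HEALTH_URL = "https://app.example.com/api/health"  # hypothetical health endpoint

    def summarise_errors(log_path: Path, last_n_lines: int = 2000) -> Counter:
        """Count distinct error messages in the tail of an application log."""
        lines = log_path.read_text(errors="replace").splitlines()[-last_n_lines:]
        errors = Counter()
        for line in lines:
            match = re.search(r"ERROR\s+(.*)", line)
            if match:
                errors[match.group(1).strip()] += 1
        return errors

    def check_health(url: str) -> str:
        """Hit the health endpoint and report the status code and latency."""
        response = requests.get(url, timeout=10)
        return f"{response.status_code} in {response.elapsed.total_seconds():.2f}s"

    if __name__ == "__main__":
        print("API health:", check_health(HEALTH_URL))
        for message, count in summarise_errors(LOG_PATH).most_common(5):
            print(f"{count:>4}  {message}")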
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure. Your Key Responsibilities Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture. Skills And Attributes For Success Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment. To qualify for the role, you must have 5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions. 
Technologies and Tools Must-haves Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda Expert in Databricks – ability to develop, optimize, and troubleshoot advanced notebooks Strong experience with Amazon Redshift for scalable data warehousing and analytics Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms Good to have Exposure to Power BI or Tableau for data visualization Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms Understanding of DevOps and CI/CD automation tools for data engineering workflows SQL familiarity across large datasets and distributed databases What We Look For Enthusiastic learners with a passion for data ops and practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
8.0 years
0 Lacs
Kanayannur, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure. Your Key Responsibilities Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture. Skills And Attributes For Success Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment. To qualify for the role, you must have 5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions. 
Technologies and Tools Must-haves Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda Expert in Databricks – ability to develop, optimize, and troubleshoot advanced notebooks Strong experience with Amazon Redshift for scalable data warehousing and analytics Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms Good to have Exposure to Power BI or Tableau for data visualization Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms Understanding of DevOps and CI/CD automation tools for data engineering workflows SQL familiarity across large datasets and distributed databases What We Look For Enthusiastic learners with a passion for data ops and practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure. Your Key Responsibilities Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture. Skills And Attributes For Success Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment. To qualify for the role, you must have 5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions. 
Technologies and Tools Must-haves Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda Expert in Databricks – ability to develop, optimize, and troubleshoot advanced notebooks Strong experience with Amazon Redshift for scalable data warehousing and analytics Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms Good to have Exposure to Power BI or Tableau for data visualization Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms Understanding of DevOps and CI/CD automation tools for data engineering workflows SQL familiarity across large datasets and distributed databases What We Look For Enthusiastic learners with a passion for data ops and practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure. Your Key Responsibilities Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture. Skills And Attributes For Success Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment. To qualify for the role, you must have 5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions. 
Technologies and Tools Must haves Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda Expert in Databricks – ability to develop, optimize, and troubleshoot advanced notebooks Strong experience with Amazon Redshift for scalable data warehousing and analytics Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms Good to have Exposure to Power BI or Tableau for data visualization Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms Understanding of DevOps and CI/CD automation tools for data engineering workflows SQL familiarity across large datasets and distributed databases What We Look For Enthusiastic learners with a passion for data operations and best practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure. Your Key Responsibilities Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture. Skills And Attributes For Success Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment. To qualify for the role, you must have 5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions. 
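As a rough illustration of the Redshift query-tuning and window-function skills listed above, a small sketch using psycopg2; the cluster endpoint, schema, and column names are hypothetical and not taken from the posting.

```python
# Sketch: latest order per customer via a window function, which avoids a
# self-join and typically scans the table once. Connection details are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

query = """
    SELECT customer_id, order_id, order_ts
    FROM (
        SELECT customer_id, order_id, order_ts,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
        FROM sales.orders
    ) ranked
    WHERE rn = 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for customer_id, order_id, order_ts in cur.fetchmany(10):
        print(customer_id, order_id, order_ts)
conn.close()
```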
Technologies and Tools Must haves Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda Expert in Databricks – ability to develop, optimize, and troubleshoot advanced notebooks Strong experience with Amazon Redshift for scalable data warehousing and analytics Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline Hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms Good to have Exposure to Power BI or Tableau for data visualization Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms Understanding of DevOps and CI/CD automation tools for data engineering workflows SQL familiarity across large datasets and distributed databases What We Look For Enthusiastic learners with a passion for data operations and best practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles & Responsibilities We are seeking an experienced and visionary Data Architect with over 15 years of experience to lead the design and implementation of scalable, secure, and high-performing data architectures. The ideal candidate should have a deep understanding of cloud-native architectures, enterprise data platforms, and end-to-end data lifecycle management. You will work closely with business, engineering, and product teams to craft robust data solutions that drive business intelligence, analytics, and AI initiatives. Key Responsibilities Design and implement enterprise-grade data architectures using cloud platforms (e.g., AWS, Azure, GCP). Lead the definition of data architecture standards, guidelines, and best practices. Architect scalable data solutions including data lakes, data warehouses, and real-time streaming platforms. Collaborate with data engineers, analysts, and data scientists to understand data requirements and deliver optimal solutions. Oversee data modeling activities including conceptual, logical, and physical data models. Ensure data security, privacy, and compliance with applicable regulations (e.g., GDPR, HIPAA). Define and implement data governance strategies in collaboration with stakeholders. Evaluate and recommend data-related tools and technologies. Provide architectural guidance and mentorship to data engineering teams. Participate in client discussions, pre-sales, and proposal building (if in a consulting environment). Required Skills & Qualifications 15+ years of experience in data architecture, data engineering, or database development. Strong experience architecting data solutions on at least one major cloud platform (AWS, Azure, or GCP). Deep understanding of data management principles, data modeling, ETL/ELT pipelines, and data warehousing. Hands-on experience with modern data platforms and tools (e.g., Snowflake, Databricks, BigQuery, Redshift, Synapse, Apache Spark). Proficiency with programming languages such as Python, SQL, or Java. Familiarity with real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub. Experience implementing data governance, data cataloging, and data quality frameworks. Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Cloud Architect or Data Architect certification (AWS/Azure/GCP) is a strong plus. Preferred Certifications AWS Certified Solutions Architect – Professional Microsoft Certified: Azure Solutions Architect Expert Google Cloud Professional Data Engineer TOGAF or equivalent architecture frameworks What We Offer A collaborative and inclusive work environment Opportunity to work on cutting-edge data and AI projects Flexible work options Competitive compensation and benefits package Experience 16-18 Years Skills Primary Skill: Data Architecture Sub Skill(s): Data Architecture Additional Skill(s): Data Architecture About The Company Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. 
Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
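For illustration only, a minimal consumer sketch of the real-time streaming ingestion referenced in the requirements above, using the kafka-python client; the topic, brokers, and group id are hypothetical placeholders.

```python
# Sketch of a streaming ingestion consumer. Topic, brokers, and group id are hypothetical.
import json

from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "orders-events",                              # hypothetical topic
    bootstrap_servers=["broker1:9092"],           # hypothetical brokers
    group_id="data-platform-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline the event would be validated, enriched, and landed in the lake.
    print(event.get("order_id"), event.get("amount"))
```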
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Pune, Maharashtra
On-site
COMPANY OVERVIEW Domo is a cloud-native data experiences innovator that puts data to work for everyone. Underpinned by AI, data science, and a secure data foundation, our platform makes data actionable with user-friendly dashboards and apps. With Domo, companies get intuitive, agile data experiences that power exponential business impact. POSITION SUMMARY Our Technical Support team is looking for problem solvers with executive presence and polish: highly versatile, reliable, self-starting individuals with deep technical troubleshooting skills and experience. You will help Domo clients facilitate their digital transformation and strategic initiatives and increase brand loyalty and referenceability through world-class technical support. When our customers succeed, we succeed. The Technical Support team is staffed 24/7, which allows our global customers to contact us at their convenience. Support Team members build strong, lasting relationships with customers by understanding their needs and concerns. This team takes the lead in providing a world-class experience for every person who contacts Domo through our Support Team. KEY RESPONSIBILITIES Provide exceptional service by connecting, solving, and building relationships with our customers. Interactions may include casework over telephone, email, Zoom, in person, or other internal tools, as needed and determined by the business. Thinking outside the box, our advisors are offered a high degree of latitude to find and develop solutions. Successful candidates will demonstrate independent thinking that consistently leads to robust and scalable solutions for our customers; Perpetually expand your knowledge of Domo’s platform, Business Intelligence, data, and analytics through on-the-job training, time for side projects, and Domo certification; Provide timely (within SLAs), consistent, and ongoing communication with your peers and customers regarding their support cases until those cases are solved. JOB REQUIREMENTS Essential: Bachelor's degree in a technical field (computer science, mathematics, statistics, analytics, etc.) or 3-5 years of related experience in a relevant field. Show us that you know how to learn, find answers, and develop solutions on your own. At least 2 years of experience in a support role, ideally in a customer-facing environment. Communicate clearly and effectively with customers to fully meet their needs. You will be working with experts in their field; quickly establishing rapport and trust with them is critical. Strong SQL experience is a must: you should be able to explain, from memory, the basic purpose and SQL syntax behind joins, unions, selects, grouping, aggregation, indexes, subqueries, etc. Software application support experience. Preference given to SaaS, analytics, data, and Business Intelligence fields. Experience working methodically through queues, following through on commitments, SOPs, and company policies, with professional communication etiquette in verbal and written correspondence. Flexible and adaptable to rapid change. This is a fast-paced industry and there will always be something new to learn. Desired: APIs - REST/SOAP, endpoints, uses, authentication, methods, Postman; Programming languages - Python, JavaScript, Java, etc. Relational databases - MySQL, PostgreSQL, MSSQL, Redshift, Oracle, ODBC, OLE DB, JDBC Statistical computing - R, Jupyter JSON/XML – Reading, parsing, XPath, etc. SSO/IDP – OpenID Connect, SAML, Okta, Azure AD, Ping Identity Snowflake Data Cloud / ETL. 
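As a self-contained refresher on the SQL fundamentals this posting asks candidates to explain (joins, grouping, aggregation), a small sketch using only Python's standard-library sqlite3 module; the tables and data are invented for the example.

```python
# Self-contained join + aggregation refresher, runnable with the standard library only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# JOIN + GROUP BY + aggregates: order count and total spend per customer.
cur.execute("""
    SELECT c.name, COUNT(o.id) AS order_count, SUM(o.amount) AS total_spend
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total_spend DESC;
""")
for name, order_count, total_spend in cur.fetchall():
    print(name, order_count, total_spend)
conn.close()
```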
LOCATION: Pune, Maharashtra, India INDIA BENEFITS & PERKS Medical cash allowance provided Maternity and Paternity Leave policy Baby bucks: cash allowance to spend on anything for every newborn or child adopted Haute Mama: cash allowance to spend on Maternity Wardrobe (only for women employees) 18 days paid time off + 10 holidays + 12 medical leaves Sodexo Meal Pass Health and Wellness Benefit One-time Technology Benefit towards the purchase of a tablet or smartwatch Corporate National Pension Scheme Employee Assistance Programme (EAP) Domo is an equal opportunity employer. #LI-TU1 #LI-Hybrid
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Principal Data Engineer – Hyderabad (Onsite) Job Title: Principal Data Engineer Work Location: Hyderabad (Onsite) Experience: 10+ Years Job Description: 10+ years of experience in data engineering, with at least 3 years in a technical leadership role. Strong expertise in SQL, Python or Scala, and modern ETL/ELT frameworks. Deep knowledge of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) and distributed systems (e.g., Hadoop, Spark). Proven experience with cloud platforms (AWS, Azure, or GCP) and associated data services (e.g., S3, Glue, Dataflow, Databricks). Hands-on experience with streaming platforms such as Kafka, Flink, or Kinesis. Solid understanding of data modeling, data lakes, data governance, and security. Excellent communication, leadership, and stakeholder management skills. Preferred Qualifications: Exposure to tools like Airflow, dbt, Terraform, or Kubernetes. Familiarity with data cataloging and lineage tools (e.g., Alation, Collibra). Domain experience (e.g., Banking, Healthcare, Finance, E-commerce) is a plus. Experience in designing data platforms for AI/ML workloads.
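For orientation, a minimal PySpark sketch of the batch lake transformation work implied by the description above; the S3 paths, columns, and aggregation are hypothetical placeholders, not details from the posting.

```python
# Sketch: read raw files, aggregate, and write a partitioned curated table.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

raw = spark.read.parquet("s3://example-raw-zone/orders/")          # hypothetical path

daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-zone/daily_orders/"))          # hypothetical path

spark.stop()
```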
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come. Join us at Syensqo, where our IT team is gearing up to enhance its capabilities. We play a crucial role in the group's transformation—accelerating growth, reshaping progress, and creating sustainable shared value. IT team is making operational adjustments to supercharge value across the entire organization. Here at Syensqo, we're one strong team! Our commitment to accountability drives us as we work hard to deliver value for our customers and stakeholders. In our dynamic and collaborative work environment, we add a touch of enjoyment while staying true to our motto: reinvent progress. Come be part of our transformation journey and contribute to the change as a future team member. We are looking for: As a Data/ML Engineer, you will play a central role in defining, implementing, and maintaining cloud governance frameworks across the organization. You will collaborate with cross-functional teams to ensure secure, compliant, and efficient use of cloud resources for data and machine learning workloads. Your expertise in full-stack automation, DevOps practices, and Infrastructure as Code (IaC) will drive the standardization and scalability of our cloud-based data and ML platforms. Key requirements are: Ensuring cloud data governance Define and maintain central cloud governance policies, standards, and best practices for data, AI and ML workloads Ensure compliance with security, privacy, and regulatory requirements across all cloud environments Monitor and optimize cloud resource usage, cost, and performance for data, AI and ML workloads Design and Implement Data Pipelines Co-develop, co-construct, test, and maintain highly scalable and reliable data architectures, including ETL processes, data warehouses, and data lakes with the Data Platform Team Build and Deploy ML Systems Co-design, co-develop, and deploy machine learning models and associated services into production environments, ensuring performance, reliability, and scalability Infrastructure Management Manage and optimize cloud-based infrastructure (e.g., AWS, Azure, GCP) for data storage, processing, and ML model serving Collaboration Work collaboratively with data scientists, ML engineers, security and business stakeholders to align cloud governance with organizational needs Provide guidance and support to teams on cloud architecture, data management, and ML operations. Work collaboratively with other teams to transition prototypes and experimental models into robust, production-ready solutions Data Governance and Quality: Implement best practices for data governance, data quality, and data security to ensure the integrity and reliability of our data assets. Performance and Optimisation: Identify and implement performance improvements for data pipelines and ML models, optimizing for speed, cost-efficiency, and resource utilization. 
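As a rough sketch of the data governance and quality responsibilities described above, a minimal batch validation gate in Python with pandas; the file path, columns, and thresholds are hypothetical placeholders.

```python
# Minimal data-quality gate: fail fast when a batch violates basic expectations.
# Path, columns, and thresholds are hypothetical.
import pandas as pd

def validate_batch(path: str) -> pd.DataFrame:
    df = pd.read_parquet(path)

    problems = []
    if df["record_id"].duplicated().any():
        problems.append("duplicate record_id values")
    if df["amount"].isna().mean() > 0.01:          # allow at most 1% missing values
        problems.append("too many missing amounts")
    if (df["amount"] < 0).any():
        problems.append("negative amounts present")

    if problems:
        raise ValueError("Batch rejected: " + "; ".join(problems))
    return df

# Example call (hypothetical location):
# validate_batch("s3://example-bucket/landing/transactions/2024-06-01.parquet")
```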
Monitoring and Alerting Establish and maintain monitoring, logging, and alerting systems for data pipelines and ML models to proactively identify and resolve issues Tooling and Automation Design and implement full-stack automation for data pipelines, ML workflows, and cloud infrastructure Build and manage cloud infrastructure using IaC tools (e.g., Terraform, CloudFormation) Develop and maintain CI/CD pipelines for data and ML projects Promote DevOps culture and best practices within the organization Develop and maintain tools and automation scripts to streamline data operations, model training, and deployment processes Stay Current on new ML / AI trends: Keep abreast of the latest advancements in data engineering, machine learning, and cloud technologies, evaluating and recommending new tools and approach Document processes, architectures, and standards for knowledge sharing and onboarding Education and experience Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field. (Relevant work experience may be considered in lieu of a degree). Programming: Strong proficiency in Python (essential) and experience with other relevant languages like Java, Scala, or Go. Data Warehousing/Databases: Solid understanding and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) is highly desirable. Big Data Technologies: Hands-on experience with big data processing frameworks (e.g., Spark, Flink, Hadoop). Cloud Platforms: Experience with at least one major cloud provider (AWS, Azure, or GCP) and their relevant data and ML services (e.g., S3, EC2, Lambda, EMR, SageMaker, Dataflow, BigQuery, Azure Data Factory, Azure ML). ML Concepts: Fundamental understanding of machine learning concepts, algorithms, and workflows. MLOps Principles: Familiarity with MLOps principles and practices for deploying, monitoring, and managing ML models in production. Version Control: Proficiency with Git and collaborative development workflows. Problem-Solving: Excellent analytical and problem-solving skills with a strong attention to detail. Communication: Strong communication skills, able to articulate complex technical concepts to both technical and non-technical stakeholders. Bonus Points (Highly Desirable Skills & Experience): Experience with containerisation technologies (Docker, Kubernetes). Familiarity with CI/CD pipelines for data and ML deployments. Experience with stream processing technologies (e.g., Kafka, Kinesis). Knowledge of data visualization tools (e.g., Tableau, Power BI, Looker). Contributions to open-source projects or a strong portfolio of personal projects. Experience with [specific domain knowledge relevant to your company, e.g., financial data, healthcare data, e-commerce data]. Language skills Fluent English What’s in it for the candidate Be part of a highly motivated team of explorers Help make a difference and thrive in Cloud and AI technology Chart your own course and build a fantastic career Have fun and enjoy life with an industry leading remuneration pack About Us Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. 
Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity. At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. Position Summary The Senior Manager, Senior Solution Engineer, Drug Development Information Technology (DDIT) will be part of the product team committed to bridging the gap between technology and business needs within the Clinical Data Ecosystem (CDE) that primarily delivers technology strategy and solutions for clinical trial execution, Global Biostatistics and Data Sciences, and Clinical Data Management (Clinical Analytics, Site Selection, Feasibility, Real World Evidence). The role is based out of our Hyderabad office, and is part of the Research and Development (R&D) BI&T data team that delivers data and analytics capabilities across Drug Development (DD). You will be specifically supporting our Clinical Data Filing and Sharing (CDFS) Product line. This is a critical role that supports systems necessary for BMS' direct value chain: regulated analysis and reporting for every trial at BMS. Desired Candidate Characteristics Have a strong commitment to a career in technology with a passion for healthcare. Ability to understand the needs of the business and commitment to deliver the best user experience and adoption. Able to collaborate across multiple teams. Demonstrated leadership experience. Excellent communication skills. Innovative and inquisitive nature to ask questions, offer bold ideas and challenge the status quo. Agility to learn new tools and processes. As the candidate grows in their role, they will get additional training and there is opportunity to expand responsibilities and exposure to additional areas within Drug Development. This includes working with Data Product Leads, providing input and innovation opportunities to modernize with cutting-edge technologies (Agentic AI, advanced automation, visualization and application development techniques). Key Responsibilities Architect and lead the evolution of the Statistical Computing Environment (SCE) platform to support modern clinical trial requirements Partner with data management and statistical programming leads to support seamless integration of data pipelines and metadata-driven standards Lead the development of automated workflows for clinical data ingestion, transformation, analysis, and reporting within the SCE. Drive process automation and efficiency initiatives in data preparation and statistical programming workflows Develop and implement solutions to enhance system performance, stability and security Act as a subject matter expert for AWS SAS implementations and integration with clinical systems. 
Lead the implementation of cloud-based infrastructure using AWS EC2, Auto Scaling, CloudWatch, and related AWS services. Provide architectural guidance and oversight for CDISC SDTM/ADaM data standards and eCTD regulatory submissions. Collaborate with cross-functional teams to identify product improvements and enhancements. Administer the production environment and diagnose and resolve technical issues in a timely manner, documenting solutions for future reference. Coordinate with vendors, suppliers, and contractors to ensure the timely delivery of products and services. Serve as a technical mentor for development and operations teams supporting SCE solutions. Analyze business challenges and identify areas for improvement through technology solutions. Ensure regulatory and security compliance through proper governance and access controls. Provide guidance to the resources supporting projects, enhancements, and operations. Stay up to date with the latest technology trends and industry best practices. Qualifications & Experience Master's or Bachelor's degree in computer science, information technology, or a related field preferred. 15+ years of experience in software development and engineering, clinical development, or data science. 8-10 years of hands-on experience implementing and operating different types of Statistical Computing Environments (SCE) within the Life Sciences and Healthcare business vertical. Strong experience with SAS in an AWS-hosted environment including EC2, S3, IAM, Glue, Athena, and Lambda (an illustrative Athena sketch follows this posting). Hands-on development experience managing and delivering data solutions with AWS data, analytics, and AI technologies such as AWS Glue, Redshift, RDS (PostgreSQL), S3, Athena, Lambda, Databricks, Business Intelligence and Visualization tools, etc. Experience with R, Python, or other programming languages for data analysis or automation. Experience in shell/Python scripting and Linux automation for operational monitoring and alerting across the environment. Familiarity with cloud DevOps practices, infrastructure-as-code (e.g., CloudFormation, Terraform). Expertise in SAS Grid architecture, grid node orchestration, and job lifecycle management. Strong working knowledge of SASGSUB, job submission parameters, and performance tuning. Understanding of submission readiness and Health Authority requirements for data traceability and transparency. Excellent communication, collaboration and interpersonal skills to interact with diverse stakeholders. Ability to work both independently and collaboratively in a team-oriented environment. Comfortable working in a fast-paced environment with minimal oversight. Prior experience working in an Agile-based environment. If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. 
This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
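The illustrative Athena sketch referenced in the qualifications above: querying curated data on S3 through Athena with boto3. The database, table, and results location are hypothetical placeholders and not details from the posting.

```python
# Sketch: run an Athena query against a curated S3 dataset and print the rows.
# Database, table, and output location are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

start = athena.start_query_execution(
    QueryString="SELECT study_id, COUNT(*) AS n FROM adam_adsl GROUP BY study_id",  # hypothetical table
    QueryExecutionContext={"Database": "clinical_curated"},                          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds the column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```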
Posted 1 week ago
10.0 years
0 Lacs
Delhi, India
Remote
JOB_POSTING-3-72216-3 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. Role Summary/Purpose We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities Manage end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Providing guidance on suitable options, designing, and creating data pipeline for the analytical solutions across data lake, data warehouses and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage process are optimized for scalability and performance in cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment. 
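For illustration, a minimal sketch of the S3-to-Redshift loading pattern implied by the pipeline responsibilities above, issued from Python with psycopg2; the cluster endpoint, table, bucket, and IAM role are hypothetical placeholders.

```python
# Sketch: bulk-load a landed S3 partition into Redshift with COPY.
# Endpoint, table, bucket, and IAM role are hypothetical.
import psycopg2

copy_sql = """
    COPY analytics.loan_activity
    FROM 's3://example-landing/loan_activity/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dw",
    user="etl_user",
    password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)   # Redshift COPY loads the partition in parallel across slices
conn.close()
```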
Required Skills/Knowledge Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or in lieu of a degree, 12+ years of relevant experience Minimum 10+ years of experience in managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments Minimum 10+ years of financial services experience Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years of hands-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology. Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or similar technical field of study (Master's degree preferred) Minimum 12+ years of experience in managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments Minimum 12+ years of financial services experience Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud 8+ years of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. Involvement in large-scale data analytics migration from on premises to a public cloud. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Work Timings: 3:00 PM IST to 12:00 AM IST (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. 
Please discuss this with the hiring manager for more details .) For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, PIP) L10+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L10+ Employees can apply Level / Grade : 12 Job Family Group Information Technology
Posted 1 week ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
JOB_POSTING-3-72216-2 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. Role Summary/Purpose We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities Manage end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Providing guidance on suitable options, designing, and creating data pipeline for the analytical solutions across data lake, data warehouses and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage process are optimized for scalability and performance in cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment. 
Required Skills/Knowledge Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or in lieu of a degree, 12+ years of relevant experience Minimum 10+ years of experience in managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments Minimum 10+ years of financial services experience Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years of hands-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology. Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or similar technical field of study (Master's degree preferred) Minimum 12+ years of experience in managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments Minimum 12+ years of financial services experience Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud 8+ years of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. Involvement in large-scale data analytics migration from on premises to a public cloud. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Work Timings: 3:00 PM IST to 12:00 AM IST (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. 
Please discuss this with the hiring manager for more details .) For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, PIP) L10+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L10+ Employees can apply Level / Grade : 12 Job Family Group Information Technology
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
JOB_POSTING-3-72216-1 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. Role Summary/Purpose We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities Manage end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Providing guidance on suitable options, designing, and creating data pipeline for the analytical solutions across data lake, data warehouses and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage process are optimized for scalability and performance in cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment. 
Required Skills/Knowledge Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or in lieu of a degree, 12+ years of relevant experience Minimum 10+ years of experience in managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments Minimum 10+ years of financial services experience Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years of hands-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology. Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or similar technical field of study (Master's degree preferred) Minimum 12+ years of experience in managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments Minimum 12+ years of financial services experience Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud 8+ years of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. Involvement in large-scale data analytics migration from on premises to a public cloud. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Work Timings: 3:00 PM IST to 12:00 AM IST (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. 
Please discuss this with the hiring manager for more details .) For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, PIP) L10+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L10+ Employees can apply Level / Grade : 12 Job Family Group Information Technology
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
JOB_POSTING-3-72216 Job Description Role Title: VP, Data Engineering Tech Lead (L12) Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 in the LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore. Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. The team is also responsible for the SYF public cloud platform & services: governing health, performance, capacity, and costs of resources, ensuring adherence to service levels, and building well-defined processes for cloud application development and service enablement. Role Summary/Purpose: We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment. Key Responsibilities: Manage the end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects. Provide guidance on suitable options, and design and create data pipelines for analytical solutions across data lakes, data warehouses, and cloud implementations. Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift. Ensure data integration, transformation, and storage processes are optimized for scalability and performance in the cloud environment. Ensure data security, governance, and compliance in the cloud infrastructure. Provide leadership and guidance to data engineering teams, ensuring best practices are followed. Ensure timely delivery of high-quality solutions in an Agile environment.
Required Skills/Knowledge Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or 12+ years of relevant experience in lieu of a degree. Minimum 10+ years of experience managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments. Minimum 10+ years of financial services experience. Minimum 6+ years of experience working with Data Warehouses/Data Lake/Cloud. 6+ years of hands-on programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Working knowledge of Hive, Spark, Kafka and other data lake technologies. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Ability to develop and maintain strong collaborative relationships at all levels across IT and the business. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Demonstrated ability to drive change and work effectively across business and geographical boundaries. Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology. Superior decision-making, client relationship, and vendor management skills. Desired Skills/Knowledge Prior work experience in a credit card/banking/fintech company. Experience dealing with sensitive data in a highly regulated environment. Demonstrated implementation of complex and innovative solutions. Agile experience using JIRA or similar Agile tools. Eligibility Criteria Bachelor's degree in Computer Science or a similar technical field of study (Master's degree preferred). Minimum 12+ years of experience managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments. Minimum 12+ years of financial services experience. Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lake/Cloud. 8+ years of programming experience in ETL tools - Ab Initio or Informatica highly preferred. Be able to read and reverse engineer the logic in Ab Initio graphs. Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc. Rigorous data analysis through SQL in Oracle and various Hadoop technologies. Involvement in large-scale data analytics migration from on-premises to a public cloud. Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution. Experience analyzing system requirements and implementing migration methods for existing data. Excellent written and oral communication skills, along with a strong ability to lead and influence others. Experience working iteratively in a fast-paced agile environment. Work Timings: 3:00 PM IST to 12:00 AM IST (WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs.
Please discuss this with the hiring manager for more details.) For Internal Applicants Understand the criteria or mandatory skills required for the role before applying. Inform your manager and HRM before applying for any role on Workday. Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format). Must not be on any corrective action plan (First Formal/Final Formal, PIP). Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible. L10+ employees can apply. Level/Grade: 12 Job Family Group: Information Technology
Posted 1 week ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Our Analytics and Insights Managed Services team brings a unique combination of industry expertise, technology, data management and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights and to optimize processes for efficiency and client satisfaction. The role requires a deep understanding of IT services, operational excellence, and client-centric solutions. Job Requirements And Preferences: Minimum Degree Required: Bachelor’s degree in Information Technology, Data Science, Computer Science, Statistics, or a related field (Master’s degree preferred). Minimum Years of Experience: 14 year(s), with at least 3 years in a managerial or leadership role. Proven experience in managing data analytics services for external clients, preferably within a managed services or consulting environment. Technical Skills: Experience and know-how of working with a combination or subset of the tools and technologies listed below. Proficiency in data analytics tools (e.g., Power BI, Tableau, QlikView), data integration tools (ETL, Informatica, Talend, Snowflake, etc.) and programming languages (e.g., Python, R, SAS, SQL). Strong understanding of Data & Analytics cloud services on platforms such as AWS, Azure, and GCP (e.g., AWS Glue, EMR, ADF, Redshift, Synapse, BigQuery) and big data technologies (e.g., Hadoop, Spark). Familiarity with traditional data warehousing tools such as Teradata and Netezza. Familiarity with machine learning, AI, and automation in data analytics. Certification in data-related disciplines preferred. Leadership: Demonstrated ability to lead teams, manage complex projects, and deliver results. Communication: Excellent verbal and written communication skills, with the ability to present complex information to non-technical stakeholders. Roles & Responsibilities: Demonstrates intimate abilities and/or a proven record of success as a team leader, emphasizing the following: Client Relationship Management: Serve as the focal point for client interactions, maintaining strong relationships. Manage client escalations and ensure timely resolution of issues. Act as the face of the team for strategic client discussions, governance, and regular cadence with the client. Service Delivery Management: Lead end-to-end delivery of managed data analytics services to clients, ensuring projects meet business requirements, timelines, and quality standards. Deliver minor enhancements and bug fixes aligned to the client’s service delivery model. Set up Incident Management and Problem Management processes for the engagement. Collaborate with cross-functional teams, including data engineers, data scientists, and business analysts, to deliver end-to-end solutions. Monitor, manage, and report service-level agreements (SLAs) and key performance indicators (KPIs). Solid financial acumen with experience in budget management. Problem-solving and decision-making skills, with the ability to think strategically. Operational Excellence & Practice Growth: Implement and oversee standardized processes, workflows, and best practices to ensure efficient operations. Utilize tools and systems for service monitoring, reporting, and automation to improve service delivery. Drive innovation and automation in data integration, processing, analysis, and reporting workflows. Keep up to date with industry trends, emerging technologies, and regulatory requirements impacting managed services.
Risk and Compliance: Ensure data security, privacy, and compliance with relevant standards and regulations. Ensure all managed services are delivered in compliance with relevant regulatory requirements and industry standards. Proactively identify and mitigate operational risks that could affect service delivery. Team Leadership & Development: Lead and mentor a team of service managers and technical professionals to ensure high performance and continuous development. Foster a culture of collaboration, accountability, and excellence within the team. Ensure the team is trained on the latest industry best practices, tools, and methodologies. Capacity management, experience with practice development, and a strong understanding of agile practices, cloud platforms, and infrastructure management. Pre-Sales Experience: Collaborate with sales teams to identify opportunities for growth and expansion of services. Experience in solutioning of responses and operating models, including estimation frameworks, content contribution, and solution architecture in responding to RFPs.
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: Associate Director - Data Engineering We at Pine Labs are looking for those who share our core belief - Every Day is Game Day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services. Responsibilities Own the data engineering roadmap from streaming ingestion to analytics-ready layers. Lead and evolve our real-time data pipelines using Apache Kafka/MSK, Debezium CDC, and Apache Pinot. Architect and maintain custom-built Java/Python-based frameworks for data ingestion, validation, transformation, and replication. Design and manage data lakes using Apache Iceberg, Glue, and Athena over S3. Define and enforce data modelling standards across Pinot, Redshift and warehouse layers. Lead implementation and scaling of open-source BI tools (Superset) and orchestration platforms (Airflow). Drive cloud-native deployment strategies using ECS/EKS, Terraform/CDK, and Docker. Collaborate with ML, product, and business teams to support advanced analytics and AI use cases. Mentor data engineers and evangelize modern architecture and engineering best practices. Ensure observability, lineage, data quality, and governance across the data stack. What Matters In This Role Proven expertise in Kafka/MSK, Debezium, and real-time event-driven architectures. Hands-on experience with Pinot, Redshift, and RocksDB or other NoSQL databases. Strong background in custom tooling using Java and Python. Experience with Apache Airflow, Superset, Iceberg, Athena, and Glue. Strong AWS ecosystem knowledge (IAM, S3, Lambda, Glue, ECS/EKS, CloudWatch, etc.). Deep understanding of data lake architecture, streaming vs. batch processing, and CDC concepts. Familiarity with modern data formats (Parquet, Avro) and storage abstractions. Nice-to-Have Exposure to dbt, Trino/Presto, ClickHouse, or Druid. Familiarity with data security practices, encryption at rest/in transit, and GDPR/PCI compliance. Experience with DevOps practices, GitHub Actions, CI/CD, and Terraform/CloudFormation. What We Value In Our People You take the shot: You Decide Fast and You Deliver Right. You are the CEO of what you do: you show ownership and make things happen. You own tomorrow: by building solutions for the merchants and doing the right thing. You sign your work like an artist: You seek to learn and take pride in the work you do.
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Teqfocus: At Teqfocus, we partner with global enterprises to deliver cutting-edge solutions across Cloud, Data, and Digital domains. Our team thrives on innovation, collaboration, and excellence. As a Data Architect, you will help design and implement robust, scalable data architectures across multiple cloud platforms for our clients. Location - Hybrid - Pune/Ranchi/Bengaluru/Delhi/NCR Role Summary: The Data Architect will lead the design and implementation of end-to-end data solutions across multiple cloud platforms, including AWS, Snowflake, and Databricks. You will collaborate with cross-functional teams to understand business needs, define data strategy, and ensure high-quality, scalable, and secure data systems. Key Responsibilities: Architectural Design Design enterprise-grade data architectures for large-scale systems using AWS, Snowflake, and Databricks. Define and document logical and physical data models. Select and integrate appropriate data storage, processing, and governance technologies. Data Strategy & Governance Establish data strategy, standards, and best practices for data management and security. Define data governance policies, including data lineage, metadata management, and quality standards. Drive data privacy and compliance initiatives (e.g., GDPR, HIPAA as applicable). Implementation Leadership Lead and mentor data engineering teams in building ETL/ELT pipelines. Oversee migration projects from on-premises to cloud-based data platforms. Conduct performance tuning and optimization of data systems. Cross-Functional Collaboration Work with business stakeholders to gather requirements and translate them into technical solutions. Collaborate with cloud engineers, solution architects, and data scientists. Support pre-sales teams with solution design and proposal development when required. Innovation & Continuous Improvement Evaluate and recommend emerging data technologies. Foster a culture of continuous learning and knowledge sharing in the data engineering team. Stay up to date with the latest trends in multi-cloud data architecture. Requirements: Data Architecture Expertise 10+ years of overall experience, with at least 3 years in a dedicated Data Architect role. Deep understanding of modern data architecture patterns (e.g., Data Lakehouse, Data Mesh, Data Warehousing, Streaming). Cloud Platforms Proven experience designing and deploying solutions on AWS (e.g., S3, Glue, Redshift, Lambda). Expertise in Snowflake (data modeling, performance tuning, security features). Strong experience with Databricks (including Delta Lake, Spark jobs, MLflow integration). Data Engineering & Modeling Proficiency in data modeling (conceptual, logical, and physical). Expertise in ETL/ELT using tools like AWS Glue, Apache Spark, or Databricks. Strong SQL skills and experience with Python or Scala for data pipelines. Security & Governance Hands-on experience with data security and IAM on cloud platforms. Knowledge of data cataloging, lineage, and governance tools (e.g., AWS Glue Data Catalog, Alation, Collibra). Soft Skills Excellent communication and stakeholder management skills. Ability to simplify complex technical concepts for business users. Strong leadership and mentoring experience. Nice to Have: Certifications: AWS Certified Solutions Architect, AWS Certified Data Analytics, Snowflake SnowPro, Databricks Certified Architect. Exposure to Azure or GCP data services. Experience with orchestration tools like dbt and Airflow.
Experience with data streaming technologies (e.g., Kafka, Kinesis). Familiarity with BI tools such as Tableau and Power BI.
Posted 1 week ago