5.0 years
10 - 30 Lacs
Mumbai Metropolitan Region
On-site
ETL DataStage Developer

About The Opportunity
A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads.

Role & Responsibilities
- Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion.
- Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality.
- Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads.
- Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools.
- Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices.
- Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability.

Skills & Qualifications

Must-Have
- 5+ years hands-on IBM DataStage development in enterprise environments.
- Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation.
- Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices.
- Proficiency in Unix/Linux shell scripting for job scheduling and automation.
- Experience with version control and CI/CD tools (Git, Jenkins, etc.).

Preferred
- Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
- Knowledge of DataStage Parallel Extender and big data connectors.
- Familiarity with Data Quality or MDM tools such as Informatica IDQ.

Benefits & Culture Highlights
- Client-facing projects offering ownership, visibility, and rapid career growth.
- Continuous learning budget covering certifications in DataStage and cloud ETL technologies.
- Collaborative, high-performance culture with merit-based rewards and on-site amenities.

Skills: DB2, SQL, performance tuning, Unix/Linux shell scripting, Git, Teradata, DataStage, Oracle, IBM DataStage, ETL, data warehousing, Jenkins
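One of the responsibilities above is ensuring end-to-end data lineage and quality across source-to-target mappings. The posting does not prescribe tooling for this, so the following is only a rough Python sketch of a source-to-target row-count reconciliation; the connection URLs, table names, and load-date column are invented for illustration.

```python
# Illustrative source-to-target reconciliation check after an ETL load.
# Connection URLs and table names are placeholders; use the drivers/dialects
# available in your environment.
import sqlalchemy as sa

SOURCE_URL = "oracle+oracledb://user:pass@src-host:1521/?service_name=SRC"  # placeholder
TARGET_URL = "db2+ibm_db://user:pass@tgt-host:50000/DWH"                    # placeholder

def row_count(engine_url: str, table: str, where: str = "1=1") -> int:
    """Return the row count for a table, optionally filtered to one load window."""
    engine = sa.create_engine(engine_url)
    with engine.connect() as conn:
        return conn.execute(sa.text(f"SELECT COUNT(*) FROM {table} WHERE {where}")).scalar_one()

def reconcile(source_table: str, target_table: str, load_date: str) -> None:
    src = row_count(SOURCE_URL, source_table, f"load_date = DATE '{load_date}'")
    tgt = row_count(TARGET_URL, target_table, f"load_date = DATE '{load_date}'")
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{source_table} -> {target_table}: source={src}, target={tgt} [{status}]")

if __name__ == "__main__":
    reconcile("STG.CUSTOMER", "DWH.DIM_CUSTOMER", "2025-07-01")  # illustrative tables
```

In practice a DataStage team might run such a check from a scheduled shell wrapper or a Jenkins stage after the nightly load completes.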
Posted 1 week ago
5.0 years
10 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
ETL DataStage Developer

About The Opportunity
A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads.

Role & Responsibilities
- Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion.
- Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality.
- Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads.
- Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools.
- Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices.
- Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability.

Skills & Qualifications

Must-Have
- 5+ years hands-on IBM DataStage development in enterprise environments.
- Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation.
- Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices.
- Proficiency in Unix/Linux shell scripting for job scheduling and automation.
- Experience with version control and CI/CD tools (Git, Jenkins, etc.).

Preferred
- Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
- Knowledge of DataStage Parallel Extender and big data connectors.
- Familiarity with Data Quality or MDM tools such as Informatica IDQ.

Benefits & Culture Highlights
- Client-facing projects offering ownership, visibility, and rapid career growth.
- Continuous learning budget covering certifications in DataStage and cloud ETL technologies.
- Collaborative, high-performance culture with merit-based rewards and on-site amenities.

Skills: DB2, SQL, performance tuning, Unix/Linux shell scripting, Git, Teradata, DataStage, Oracle, IBM DataStage, ETL, data warehousing, Jenkins
Posted 1 week ago
5.0 years
10 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Industry & Sector
A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads.

Role & Responsibilities
- Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms.
- Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale.
- Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds.
- Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira.
- Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements.
- Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals.

Skills & Qualifications

Must-Have
- 5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments.
- Hands-on proficiency in advanced SQL, joins, window functions and data profiling.
- Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools.
- Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift.
- Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD.
- Strong defect management and stakeholder communication skills.

Preferred
- Knowledge of BI visualisation layer testing with Tableau or Power BI.
- Performance benchmarking for batch loads and CDC streams.
- ISTQB or equivalent testing certification.

Benefits & Culture Highlights
- Work on high-impact data programmes for global brands using modern cloud technologies.
- Clear career ladder with sponsored certifications and internal hackathons.
- Collaborative, merit-driven culture that values innovation and continuous learning.

Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams.

Skills: SQL, Shell, Informatica PowerCenter, defect tracking, stakeholder communication, Agile, pytest, Redshift, SSIS, defect management, Snowflake, Selenium, Talend, Python, ETL testing, test automation, Spark, Hadoop, data warehousing
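The posting above asks for reusable SQL validation plus pytest-based automation wired into CI. As a hedged illustration only (the connection URLs, schemas, and table names below are assumptions, not anything specified by the employer), a minimal pytest module for completeness and integrity checks might look like this:

```python
# Minimal pytest-style ETL validation sketch: compares source and target row counts
# and checks a not-null rule on a business key. URLs, schemas and tables are
# placeholders; the target dialect would need the matching SQLAlchemy driver.
import pytest
import sqlalchemy as sa

SOURCE_URL = "postgresql+psycopg2://user:pass@source-db/sales"    # placeholder
TARGET_URL = "redshift+psycopg2://user:pass@warehouse/analytics"  # placeholder

@pytest.fixture(scope="module")
def engines():
    return sa.create_engine(SOURCE_URL), sa.create_engine(TARGET_URL)

def scalar(engine, query: str):
    with engine.connect() as conn:
        return conn.execute(sa.text(query)).scalar_one()

def test_row_counts_match(engines):
    src, tgt = engines
    source_count = scalar(src, "SELECT COUNT(*) FROM staging.orders")
    target_count = scalar(tgt, "SELECT COUNT(*) FROM dw.fact_orders")
    assert source_count == target_count, f"Row count drift: {source_count} vs {target_count}"

def test_business_key_not_null(engines):
    _, tgt = engines
    nulls = scalar(tgt, "SELECT COUNT(*) FROM dw.fact_orders WHERE order_id IS NULL")
    assert nulls == 0, f"{nulls} rows loaded without an order_id"
```

A Jenkins or Azure DevOps stage could then run `pytest` against the nightly build and fail on any assertion error.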
Posted 1 week ago
5.0 years
10 - 30 Lacs
Bengaluru, Karnataka, India
On-site
ETL DataStage Developer

About The Opportunity
A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads.

Role & Responsibilities
- Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion.
- Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality.
- Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads.
- Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools.
- Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices.
- Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability.

Skills & Qualifications

Must-Have
- 5+ years hands-on IBM DataStage development in enterprise environments.
- Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation.
- Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices.
- Proficiency in Unix/Linux shell scripting for job scheduling and automation.
- Experience with version control and CI/CD tools (Git, Jenkins, etc.).

Preferred
- Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
- Knowledge of DataStage Parallel Extender and big data connectors.
- Familiarity with Data Quality or MDM tools such as Informatica IDQ.

Benefits & Culture Highlights
- Client-facing projects offering ownership, visibility, and rapid career growth.
- Continuous learning budget covering certifications in DataStage and cloud ETL technologies.
- Collaborative, high-performance culture with merit-based rewards and on-site amenities.

Skills: DB2, SQL, performance tuning, Unix/Linux shell scripting, Git, Teradata, DataStage, Oracle, IBM DataStage, ETL, data warehousing, Jenkins
Posted 1 week ago
5.0 years
10 - 30 Lacs
Bengaluru, Karnataka, India
On-site
Industry & Sector
A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads.

Role & Responsibilities
- Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms.
- Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale.
- Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds.
- Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira.
- Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements.
- Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals.

Skills & Qualifications

Must-Have
- 5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments.
- Hands-on proficiency in advanced SQL, joins, window functions and data profiling.
- Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools.
- Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift.
- Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD.
- Strong defect management and stakeholder communication skills.

Preferred
- Knowledge of BI visualisation layer testing with Tableau or Power BI.
- Performance benchmarking for batch loads and CDC streams.
- ISTQB or equivalent testing certification.

Benefits & Culture Highlights
- Work on high-impact data programmes for global brands using modern cloud technologies.
- Clear career ladder with sponsored certifications and internal hackathons.
- Collaborative, merit-driven culture that values innovation and continuous learning.

Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams.

Skills: SQL, Shell, Informatica PowerCenter, defect tracking, stakeholder communication, Agile, pytest, Redshift, SSIS, defect management, Snowflake, Selenium, Talend, Python, ETL testing, test automation, Spark, Hadoop, data warehousing
Posted 1 week ago
5.0 years
10 - 30 Lacs
Hyderabad, Telangana, India
On-site
ETL DataStage Developer

About The Opportunity
A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads.

Role & Responsibilities
- Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion.
- Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality.
- Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads.
- Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools.
- Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices.
- Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability.

Skills & Qualifications

Must-Have
- 5+ years hands-on IBM DataStage development in enterprise environments.
- Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation.
- Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices.
- Proficiency in Unix/Linux shell scripting for job scheduling and automation.
- Experience with version control and CI/CD tools (Git, Jenkins, etc.).

Preferred
- Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
- Knowledge of DataStage Parallel Extender and big data connectors.
- Familiarity with Data Quality or MDM tools such as Informatica IDQ.

Benefits & Culture Highlights
- Client-facing projects offering ownership, visibility, and rapid career growth.
- Continuous learning budget covering certifications in DataStage and cloud ETL technologies.
- Collaborative, high-performance culture with merit-based rewards and on-site amenities.

Skills: DB2, SQL, performance tuning, Unix/Linux shell scripting, Git, Teradata, DataStage, Oracle, IBM DataStage, ETL, data warehousing, Jenkins
Posted 1 week ago
5.0 years
10 - 30 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector
A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads.

Role & Responsibilities
- Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms.
- Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale.
- Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds.
- Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira.
- Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements.
- Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals.

Skills & Qualifications

Must-Have
- 5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments.
- Hands-on proficiency in advanced SQL, joins, window functions and data profiling.
- Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools.
- Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift.
- Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD.
- Strong defect management and stakeholder communication skills.

Preferred
- Knowledge of BI visualisation layer testing with Tableau or Power BI.
- Performance benchmarking for batch loads and CDC streams.
- ISTQB or equivalent testing certification.

Benefits & Culture Highlights
- Work on high-impact data programmes for global brands using modern cloud technologies.
- Clear career ladder with sponsored certifications and internal hackathons.
- Collaborative, merit-driven culture that values innovation and continuous learning.

Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams.

Skills: SQL, Shell, Informatica PowerCenter, defect tracking, stakeholder communication, Agile, pytest, Redshift, SSIS, defect management, Snowflake, Selenium, Talend, Python, ETL testing, test automation, Spark, Hadoop, data warehousing
Posted 1 week ago
5.0 years
10 - 30 Lacs
Mumbai Metropolitan Region
On-site
Industry & Sector
A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads.

Role & Responsibilities
- Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms.
- Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale.
- Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds.
- Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira.
- Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements.
- Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals.

Skills & Qualifications

Must-Have
- 5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments.
- Hands-on proficiency in advanced SQL, joins, window functions and data profiling.
- Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools.
- Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift.
- Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD.
- Strong defect management and stakeholder communication skills.

Preferred
- Knowledge of BI visualisation layer testing with Tableau or Power BI.
- Performance benchmarking for batch loads and CDC streams.
- ISTQB or equivalent testing certification.

Benefits & Culture Highlights
- Work on high-impact data programmes for global brands using modern cloud technologies.
- Clear career ladder with sponsored certifications and internal hackathons.
- Collaborative, merit-driven culture that values innovation and continuous learning.

Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams.

Skills: SQL, Shell, Informatica PowerCenter, defect tracking, stakeholder communication, Agile, pytest, Redshift, SSIS, defect management, Snowflake, Selenium, Talend, Python, ETL testing, test automation, Spark, Hadoop, data warehousing
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Position Summary
The Senior Manager, Senior Solution Engineer, Drug Development Information Technology (DDIT) will be part of the product team committed to bridging the gap between technology and business needs within the Clinical Data Ecosystem (CDE), which primarily delivers technology strategy and solutions for clinical trial execution, Global Biostatistics and Data Sciences, and Clinical Data Management (Clinical Analytics, Site Selection, Feasibility, Real World Evidence).

The role is based out of our Hyderabad office and is part of the Research and Development (R&D) BI&T data team that delivers data and analytics capabilities across Drug Development (DD). You will specifically support our Clinical Data Filing and Sharing (CDFS) product line. This is a critical role that supports systems necessary for BMS' direct value chain: regulated analysis and reporting for every trial at BMS.

Desired Candidate Characteristics
- A strong commitment to a career in technology with a passion for healthcare
- Ability to understand the needs of the business and commitment to deliver the best user experience and adoption
- Able to collaborate across multiple teams
- Demonstrated leadership experience
- Excellent communication skills
- Innovative and inquisitive nature to ask questions, offer bold ideas and challenge the status quo
- Agility to learn new tools and processes

As the candidate grows in the role, they will receive additional training, with opportunities to expand responsibilities and gain exposure to additional areas within Drug Development. This includes working with Data Product Leads and providing input and innovation opportunities to modernize with cutting-edge technologies (agentic AI, advanced automation, visualization and application development techniques).

Key Responsibilities
- Architect and lead the evolution of the Statistical Computing Environment (SCE) platform to support modern clinical trial requirements.
- Partner with data management and statistical programming leads to support seamless integration of data pipelines and metadata-driven standards.
- Lead the development of automated workflows for clinical data ingestion, transformation, analysis, and reporting within the SCE.
- Drive process automation and efficiency initiatives in data preparation and statistical programming workflows.
- Develop and implement solutions to enhance system performance, stability and security.
- Act as a subject matter expert for AWS SAS implementations and integration with clinical systems.
- Lead the implementation of cloud-based infrastructure using AWS EC2, Auto Scaling, CloudWatch, and related AWS services.
- Provide architectural guidance and oversight for CDISC SDTM/ADaM data standards and eCTD regulatory submissions.
- Collaborate with cross-functional teams to identify product improvements and enhancements.
- Administer the production environment, diagnose and resolve technical issues in a timely manner, and document solutions for future reference.
- Coordinate with vendors, suppliers, and contractors to ensure the timely delivery of products and services.
- Serve as a technical mentor for development and operations teams supporting SCE solutions.
- Analyze business challenges and identify areas for improvement through technology solutions.
- Ensure regulatory and security compliance through proper governance and access controls.
- Provide guidance to the resources supporting projects, enhancements, and operations.
- Stay up to date with the latest technology trends and industry best practices.

Qualifications & Experience
- Master's or bachelor's degree in computer science, information technology, or a related field preferred.
- 15+ years of experience in software development and engineering, clinical development or data science.
- 8-10 years of hands-on experience implementing and operating different types of Statistical Computing Environments (SCE) in the Life Sciences and Healthcare business vertical.
- Strong experience with SAS in an AWS-hosted environment, including EC2, S3, IAM, Glue, Athena, and Lambda.
- Hands-on development experience managing and delivering data solutions with AWS data, analytics and AI technologies such as AWS Glue, Redshift, RDS (PostgreSQL), S3, Athena, Lambda, Databricks, and Business Intelligence and visualization tools.
- Experience with R, Python, or other programming languages for data analysis or automation.
- Experience in shell/Python scripting and Linux automation for operational monitoring and alerting across the environment.
- Familiarity with cloud DevOps practices and infrastructure-as-code (e.g., CloudFormation, Terraform).
- Expertise in SAS Grid architecture, grid node orchestration, and job lifecycle management.
- Strong working knowledge of SASGSUB, job submission parameters, and performance tuning.
- Understanding of submission readiness and Health Authority requirements for data traceability and transparency.
- Excellent communication, collaboration and interpersonal skills to interact with diverse stakeholders.
- Ability to work both independently and collaboratively in a team-oriented environment.
- Comfortable working in a fast-paced environment with minimal oversight.
- Prior experience working in an Agile-based environment.
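The qualifications above mention shell/Python scripting and Linux automation for operational monitoring and alerting, alongside CloudWatch. As a hedged sketch only (the metric namespace, dimensions, and SNS topic ARN are placeholders, not BMS systems), a small boto3 script for surfacing failed SCE batch jobs might look like this:

```python
# Hedged sketch: publish a custom CloudWatch metric for failed batch jobs and
# wire an alarm to an SNS topic. Namespace, dimension and ARN are invented.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_failed_jobs(environment: str, failed_count: int) -> None:
    """Publish the number of failed statistical-computing jobs for one environment."""
    cloudwatch.put_metric_data(
        Namespace="SCE/BatchJobs",  # assumed namespace
        MetricData=[{
            "MetricName": "FailedJobs",
            "Dimensions": [{"Name": "Environment", "Value": environment}],
            "Value": float(failed_count),
            "Unit": "Count",
        }],
    )

def ensure_alarm(environment: str, sns_topic_arn: str) -> None:
    """Alarm whenever any job failure is reported in a 5-minute window."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"sce-failed-jobs-{environment}",
        Namespace="SCE/BatchJobs",
        MetricName="FailedJobs",
        Dimensions=[{"Name": "Environment", "Value": environment}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=[sns_topic_arn],
    )

if __name__ == "__main__":
    report_failed_jobs("prod", failed_count=2)
    ensure_alarm("prod", "arn:aws:sns:us-east-1:123456789012:sce-alerts")  # placeholder ARN
```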
If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role:
- Site-essential roles require 100% of shifts onsite at your assigned facility.
- Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture.
- For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.

If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you're not just taking on one role; you're signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity
Join our network as a Data Engineer and work directly with visionary startups to design, build, and optimize data pipelines and systems. You'll help transform raw data into actionable insights, ensuring that data flows seamlessly across the organization to support informed decision-making. Enjoy the freedom to structure your engagement on an hourly or project basis, all while working remotely.

Role Overview
As a Data Engineer, you will:
- Design & Develop Data Pipelines: Build and maintain scalable, robust data pipelines that power analytics and machine learning initiatives.
- Optimize Data Infrastructure: Ensure data is processed efficiently, securely, and in a timely manner.
- Collaborate & Innovate: Work closely with data scientists, analysts, and other stakeholders to streamline data ingestion, transformation, and storage.

What You'll Do

Data Pipeline Development:
- Design, develop, and maintain end-to-end data pipelines using modern data engineering tools and frameworks.
- Automate data ingestion, transformation, and loading processes across various data sources.
- Implement data quality and validation measures to ensure accuracy and reliability.

Infrastructure & Optimization:
- Optimize data workflows for performance and scalability in cloud environments (AWS, GCP, or Azure).
- Leverage tools such as Apache Spark, Kafka, or Airflow for data processing and orchestration.
- Monitor and troubleshoot pipeline issues, ensuring smooth data operations.

Technical Requirements & Skills
- Experience: 3+ years in data engineering or a related field.
- Programming: Proficiency in Python and SQL; familiarity with Scala or Java is a plus.
- Data Platforms: Experience with big data technologies like Hadoop, Spark, or similar.
- Cloud: Working knowledge of cloud-based data solutions (e.g., AWS Redshift, BigQuery, or Azure Data Lake).
- ETL & Data Warehousing: Hands-on experience with ETL processes and data warehousing solutions.
- Tools: Familiarity with data orchestration tools such as Apache Airflow or similar.
- Database Systems: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases.

What We're Looking For
- A detail-oriented data engineer with a passion for building efficient, scalable data systems.
- A proactive problem-solver who thrives in a fast-paced, dynamic environment.
- A freelancer with excellent communication skills and the ability to collaborate with cross-functional teams.

Why Join Us?
- Immediate Impact: Tackle challenging data problems that drive real business outcomes.
- Remote & Flexible: Work from anywhere with engagements structured to suit your schedule.
- Future Opportunities: Leverage BeGig's platform to secure additional data-focused roles as your expertise grows.
- Innovative Work: Collaborate with startups at the forefront of data innovation and technology.

Ready to Transform Data?
Apply now to become a key Data Engineer for our client and a valued member of the BeGig network!
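The role above calls out Apache Airflow for orchestration. As a minimal sketch (the DAG id, schedule, and task logic are illustrative assumptions, not a client specification), a three-step ingest/validate/load DAG in Airflow 2.x might look like this:

```python
# Minimal Airflow 2.x DAG sketch of the ingest -> validate -> load pattern.
# Task bodies are stubbed; replace the prints with real extraction/validation/load logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull incremental records from the source API or database")

def validate(**context):
    print("run completeness and schema checks before loading")

def load(**context):
    print("write validated records to the warehouse")

with DAG(
    dag_id="example_ingest_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```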
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview
Domo is a cloud-native data experiences innovator that puts data to work for everyone. Underpinned by AI, data science, and a secure data foundation, our platform makes data actionable with user-friendly dashboards and apps. With Domo, companies get intuitive, agile data experiences that power exponential business impact.

Position Summary
Our Technical Support team is looking for problem solvers with executive presence and polish: highly versatile, reliable, self-starting individuals with deep technical troubleshooting skills and experience. You will help Domo clients facilitate their digital transformation and strategic initiatives and increase brand loyalty and referenceability through world-class technical support. When our customers succeed, we succeed. The Technical Support team is staffed 24/7, which allows our global customers to contact us at their convenience. Support Team members build strong, lasting relationships with customers by understanding their needs and concerns. This team takes the lead in providing a world-class experience for every person who contacts Domo through our Support Team.

Key Responsibilities
- Provide exceptional service by connecting, solving, and building relationships with our customers. Interactions may include case work via telephone, email, Zoom, in person, or other internal tools, as needed and determined by the business.
- Think outside the box: our advisors are offered a high degree of latitude to find and develop solutions. Successful candidates will demonstrate independent thinking that consistently leads to robust and scalable solutions for our customers.
- Perpetually expand your knowledge of Domo's platform, Business Intelligence, data, and analytics through on-the-job training, time for side projects, and Domo certification.
- Provide timely (per SLAs), constant, and ongoing communication with your peers and customers regarding their support cases until those cases are solved.

Job Requirements

Essential:
- Bachelor's degree in a technical field (computer science, mathematics, statistics, analytics, etc.) or 3-5 years of related experience in a relevant field. Show us that you know how to learn, find answers, and develop solutions on your own.
- At least 2 years of experience in a support role, ideally in a customer-facing environment.
- Communicate clearly and effectively with customers to fully meet their needs. You will be working with experts in their field; quickly establishing rapport and trust with them is critical.
- Strong SQL experience is a must. From memory, can you explain the basic purpose and SQL syntax behind joins, unions, selects, grouping, aggregation, indexes, subqueries, etc.?
- Software application support experience. Preference given for SaaS, analytics, data, and Business Intelligence fields.
- Tell us about your experience working methodically through queues, following through on commitments, SOPs, company policies, and professional communication etiquette in verbal and written correspondence.
- Flexible and adaptable to rapid change. This is a fast-paced industry and there will always be something new to learn.

Desired:
- APIs: REST/SOAP, endpoints, uses, authentication, methods, Postman
- Programming languages: Python, JavaScript, Java, etc.
- Relational databases: MySQL, PostgreSQL, MSSQL, Redshift, Oracle, ODBC, OLE DB, JDBC
- Statistical computing: R, Jupyter
- JSON/XML: reading, parsing, XPath, etc.
- SSO/IDP: OpenID Connect, SAML, Okta, Azure AD, Ping Identity
- Snowflake Data Cloud / ETL
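Since strong SQL is called out above as a must, here is a small, self-contained refresher on joins, grouping/aggregation, and subqueries. It uses an in-memory SQLite database purely so the example runs anywhere; the schema is invented and is not related to Domo's platform.

```python
# Quick SQL refresher: join + aggregation, then a subquery, on an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# JOIN + GROUP BY/aggregation: total order amount per customer.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS order_count, SUM(o.amount) AS total_amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
""").fetchall()
print(rows)  # e.g. [('Acme', 2, 200.0), ('Globex', 1, 50.0)]

# Subquery: customers whose total spend exceeds the average single-order amount.
rows = conn.execute("""
    SELECT name FROM customers
    WHERE id IN (
        SELECT customer_id FROM orders
        GROUP BY customer_id
        HAVING SUM(amount) > (SELECT AVG(amount) FROM orders)
    )
""").fetchall()
print(rows)  # [('Acme',)]
```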
LOCATION: Pune, Maharashtra, India

India Benefits & Perks
- Medical cash allowance provided
- Maternity and Paternity Leave policy
- Baby bucks: cash allowance to spend on anything for every newborn or child adopted
- Haute Mama: cash allowance to spend on maternity wardrobe (only for women employees)
- 18 days paid time off + 10 holidays + 12 medical leaves
- Sodexo Meal Pass
- Health and Wellness Benefit
- One-time Technology Benefit towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)

Domo is an equal opportunity employer.
Posted 1 week ago
10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You are as unique as your background, experience and point of view. Here, you'll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world.

Job Description

Lead Solution Designer
The Senior Data Solution Designer plays a pivotal role within our evolving Data Engineering team, leading the strategic implementation of data solutions on AWS. You will be responsible for driving the technical vision and execution of cloud-based data architectures, ensuring the platform's scalability, security, and performance while addressing both business and technical requirements. This role demands a hands-on leader with deep technical expertise who can also steer strategic initiatives to success.

Responsibilities
- Spearhead the implementation, performance fine-tuning, development, and delivery of data solutions using AWS core data services, driving innovation in Data Engineering, Governance, Integration, and Virtualization.
- Oversee all technical aspects of AWS-based data systems, ensuring end-to-end delivery, and coordinate with multiple stakeholders to ensure timely implementation and effective value realization.
- Continuously enhance the D&A platform to improve performance, resiliency, scalability, and security while incorporating new data technologies and methodologies.
- Work closely with business partners, data architects, and cross-functional teams to translate complex business requirements into technical solutions.
- Develop and implement data management strategies, including Data Lakehouse, Data Warehousing, Master Data Management, and Advanced Analytics solutions.
- Combine technical solutioning with hands-on work as needed, actively contributing to the architecture, coding, and deployment of critical data systems.
- Ensure system health by monitoring platform performance, identifying potential issues, and taking preventive or corrective measures as needed.
- Be accountable for the accuracy, consistency, and overall quality of the data used in various applications and analytical processes.
- Advocate for the use of best practices in data engineering, governance, and AWS cloud technologies, advising on the latest tools and standards.

Qualifications
- 10+ years of hands-on experience in developing and architecting data solutions, with a strong background in AWS cloud services.
- Proven experience designing and implementing AWS data services (such as S3, Redshift, Athena, Glue, etc.) and a solid understanding of data service design patterns.
- Expertise in building large-scale data platforms, including Data Lakehouse, Data Warehouse, Master Data Management, and Advanced Analytics systems.
- Ability to effectively communicate complex technical solutions to both technical and non-technical stakeholders.
- Experience working with multi-disciplinary teams and aligning data strategies with business objectives.
- Demonstrated experience managing multiple projects in a high-pressure environment, ensuring timely and high-quality delivery.
- Strong problem-solving skills, with the ability to make data-driven decisions and approach challenges methodically.
- Proficiency in data solution coding, ensuring access and integration for analytical and decision-making processes.
- Good verbal and written communication skills and the ability to work independently as well as in a team environment, providing structure in ambiguous situations.

Good to have
- Experience within the Insurance domain is an added advantage.
- Solid understanding of data governance frameworks and Master Data Management principles.

This role is ideal for an experienced data architect who is passionate about leading innovative AWS data solutions while balancing technical expertise with business needs.

Job Category: IT - Digital Development
Posting End Date: 17/07/2025
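The qualifications above call for hands-on experience with AWS data services such as S3, Athena, and Glue. As an illustrative sketch only (the database name, table, and S3 results location are placeholders), querying a lakehouse table through Athena with boto3 might look like this:

```python
# Hedged sketch of an ad-hoc Athena query via boto3. Database, table, and the
# S3 output location are placeholders, not real systems.
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str, database: str, output_location: str) -> list[dict]:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )["QueryExecutionId"]

    # Poll until the query finishes (simplified; production code should add a timeout).
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    header, *data = [[col.get("VarCharValue") for col in row["Data"]] for row in rows]
    return [dict(zip(header, values)) for values in data]

if __name__ == "__main__":
    results = run_athena_query(
        "SELECT policy_type, COUNT(*) AS n FROM policies GROUP BY policy_type",  # illustrative table
        database="analytics_lakehouse",                        # placeholder
        output_location="s3://example-athena-results/adhoc/",  # placeholder
    )
    print(results)
```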
Posted 1 week ago
15.0 years
0 Lacs
Hyderābād
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Position Summary
The Senior Manager, Senior Solution Engineer, Drug Development Information Technology (DDIT) will be part of the product team committed to bridging the gap between technology and business needs within the Clinical Data Ecosystem (CDE), which primarily delivers technology strategy and solutions for clinical trial execution, Global Biostatistics and Data Sciences, and Clinical Data Management (Clinical Analytics, Site Selection, Feasibility, Real World Evidence).

The role is based out of our Hyderabad office and is part of the Research and Development (R&D) BI&T data team that delivers data and analytics capabilities across Drug Development (DD). You will specifically support our Clinical Data Filing and Sharing (CDFS) product line. This is a critical role that supports systems necessary for BMS' direct value chain: regulated analysis and reporting for every trial at BMS.

Desired Candidate Characteristics
- A strong commitment to a career in technology with a passion for healthcare
- Ability to understand the needs of the business and commitment to deliver the best user experience and adoption
- Able to collaborate across multiple teams
- Demonstrated leadership experience
- Excellent communication skills
- Innovative and inquisitive nature to ask questions, offer bold ideas and challenge the status quo
- Agility to learn new tools and processes

As the candidate grows in the role, they will receive additional training, with opportunities to expand responsibilities and gain exposure to additional areas within Drug Development. This includes working with Data Product Leads and providing input and innovation opportunities to modernize with cutting-edge technologies (agentic AI, advanced automation, visualization and application development techniques).

Key Responsibilities
- Architect and lead the evolution of the Statistical Computing Environment (SCE) platform to support modern clinical trial requirements.
- Partner with data management and statistical programming leads to support seamless integration of data pipelines and metadata-driven standards.
- Lead the development of automated workflows for clinical data ingestion, transformation, analysis, and reporting within the SCE.
- Drive process automation and efficiency initiatives in data preparation and statistical programming workflows.
- Develop and implement solutions to enhance system performance, stability and security.
- Act as a subject matter expert for AWS SAS implementations and integration with clinical systems.
- Lead the implementation of cloud-based infrastructure using AWS EC2, Auto Scaling, CloudWatch, and related AWS services.
- Provide architectural guidance and oversight for CDISC SDTM/ADaM data standards and eCTD regulatory submissions.
- Collaborate with cross-functional teams to identify product improvements and enhancements.
- Administer the production environment, diagnose and resolve technical issues in a timely manner, and document solutions for future reference.
- Coordinate with vendors, suppliers, and contractors to ensure the timely delivery of products and services.
- Serve as a technical mentor for development and operations teams supporting SCE solutions.
- Analyze business challenges and identify areas for improvement through technology solutions.
- Ensure regulatory and security compliance through proper governance and access controls.
- Provide guidance to the resources supporting projects, enhancements, and operations.
- Stay up to date with the latest technology trends and industry best practices.

Qualifications & Experience
- Master's or bachelor's degree in computer science, information technology, or a related field preferred.
- 15+ years of experience in software development and engineering, clinical development or data science.
- 8-10 years of hands-on experience implementing and operating different types of Statistical Computing Environments (SCE) in the Life Sciences and Healthcare business vertical.
- Strong experience with SAS in an AWS-hosted environment, including EC2, S3, IAM, Glue, Athena, and Lambda.
- Hands-on development experience managing and delivering data solutions with AWS data, analytics and AI technologies such as AWS Glue, Redshift, RDS (PostgreSQL), S3, Athena, Lambda, Databricks, and Business Intelligence and visualization tools.
- Experience with R, Python, or other programming languages for data analysis or automation.
- Experience in shell/Python scripting and Linux automation for operational monitoring and alerting across the environment.
- Familiarity with cloud DevOps practices and infrastructure-as-code (e.g., CloudFormation, Terraform).
- Expertise in SAS Grid architecture, grid node orchestration, and job lifecycle management.
- Strong working knowledge of SASGSUB, job submission parameters, and performance tuning.
- Understanding of submission readiness and Health Authority requirements for data traceability and transparency.
- Excellent communication, collaboration and interpersonal skills to interact with diverse stakeholders.
- Ability to work both independently and collaboratively in a team-oriented environment.
- Comfortable working in a fast-paced environment with minimal oversight.
- Prior experience working in an Agile-based environment.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role:
- Site-essential roles require 100% of shifts onsite at your assigned facility.
- Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture.
- For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.

If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 1 week ago
15.0 years
8 - 10 Lacs
Gurgaon
On-site
ROLES & RESPONSIBILITIES
We are seeking an experienced and visionary Data Architect with over 15 years of experience to lead the design and implementation of scalable, secure, and high-performing data architectures. The ideal candidate should have a deep understanding of cloud-native architectures, enterprise data platforms, and end-to-end data lifecycle management. You will work closely with business, engineering, and product teams to craft robust data solutions that drive business intelligence, analytics, and AI initiatives.

Key Responsibilities:
- Design and implement enterprise-grade data architectures using cloud platforms (e.g., AWS, Azure, GCP).
- Lead the definition of data architecture standards, guidelines, and best practices.
- Architect scalable data solutions including data lakes, data warehouses, and real-time streaming platforms.
- Collaborate with data engineers, analysts, and data scientists to understand data requirements and deliver optimal solutions.
- Oversee data modeling activities including conceptual, logical, and physical data models.
- Ensure data security, privacy, and compliance with applicable regulations (e.g., GDPR, HIPAA).
- Define and implement data governance strategies in collaboration with stakeholders.
- Evaluate and recommend data-related tools and technologies.
- Provide architectural guidance and mentorship to data engineering teams.
- Participate in client discussions, pre-sales, and proposal building (if in a consulting environment).

Required Skills & Qualifications:
- 15+ years of experience in data architecture, data engineering, or database development.
- Strong experience architecting data solutions on at least one major cloud platform (AWS, Azure, or GCP).
- Deep understanding of data management principles, data modeling, ETL/ELT pipelines, and data warehousing.
- Hands-on experience with modern data platforms and tools (e.g., Snowflake, Databricks, BigQuery, Redshift, Synapse, Apache Spark).
- Proficiency with programming languages such as Python, SQL, or Java.
- Familiarity with real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub.
- Experience implementing data governance, data cataloging, and data quality frameworks.
- Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus.
- Excellent problem-solving, communication, and stakeholder management skills.
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Cloud Architect or Data Architect certification (AWS/Azure/GCP) is a strong plus.

Preferred Certifications:
- AWS Certified Solutions Architect – Professional
- Microsoft Certified: Azure Solutions Architect Expert
- Google Cloud Professional Data Engineer
- TOGAF or equivalent architecture frameworks

What We Offer:
- A collaborative and inclusive work environment
- Opportunity to work on cutting-edge data and AI projects
- Flexible work options
- Competitive compensation and benefits package

EXPERIENCE: 16-18 Years
SKILLS
Primary Skill: Data Architecture
Sub Skill(s): Data Architecture
Additional Skill(s): Data Architecture

ABOUT THE COMPANY
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms.
Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
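The required skills earlier in this posting include hands-on experience with Apache Spark alongside warehouse platforms. As a hedged sketch (the paths, column names, and the aggregation itself are assumptions for illustration, not a project specification), a simple PySpark ELT step might look like this:

```python
# Illustrative PySpark batch ELT step: read raw events, cleanse, aggregate, write curated output.
# S3 paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_aggregate").getOrCreate()

# Extract: read raw order events landed in a data lake zone (placeholder path).
orders = spark.read.parquet("s3://example-lake/raw/orders/")

# Transform: basic cleansing plus a daily revenue aggregate per region.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Load: write a partitioned, analytics-ready table back to the curated zone.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake/curated/daily_revenue/")
)
```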
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
The Analytics Ops and Programs team in Hyderabad is looking for an innovative, hands-on and customer-obsessed Business Analyst. The candidate must be detail-oriented, have superior verbal and written communication skills, strong organizational skills and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience and get the right things done. This job requires you to constantly hit the ground running and have the ability to learn quickly.

Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline the process, identifying gaps in the existing process by analyzing data and liaising with the relevant teams to close them, and analyzing data and metrics and sharing updates with internal teams.

Amazon is an equal opportunity employer.

Basic Qualifications
- 4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

Preferred Qualifications
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ASSPL - Telangana
Job ID: A2969869
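The basic qualifications above include using SQL to pull data from a warehouse and Python to process it for reporting or modeling. As a rough illustration only (the Redshift endpoint, credentials, and table are placeholders), that workflow might look like this:

```python
# Hedged sketch: pull recent sales from a warehouse with SQL, then prepare a
# simple weekly metric in pandas. Connection details and table are placeholders.
import pandas as pd
import psycopg2

QUERY = """
    SELECT order_day, marketplace, units, revenue
    FROM analytics.daily_sales            -- illustrative table
    WHERE order_day >= CURRENT_DATE - 28
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="analyst", password="***",
)
try:
    # pandas accepts a DBAPI connection here (it may emit a warning on newer versions).
    df = pd.read_sql_query(QUERY, conn)
finally:
    conn.close()

# Simple week-over-week metric prep for a dashboard or model feature.
df["week"] = pd.to_datetime(df["order_day"]).dt.to_period("W")
weekly = df.groupby(["week", "marketplace"], as_index=False)[["units", "revenue"]].sum()
weekly["revenue_per_unit"] = weekly["revenue"] / weekly["units"]
print(weekly.head())
```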
Posted 1 week ago
12.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 80039 Date: Jul 4, 2025 Location: Delhi Designation: Manager Entity: Deloitte Touche Tohmatsu India LLP
DATE: 18-Feb-2025 BUSINESS TITLE: Data Architect
POSITION DESCRIPTION
WHAT YOU’LL DO
Define and design the future-state data architecture for risk data products. Partner with Technology, Data Stewards and various Products teams in an Agile work stream while meeting program goals and deadlines. Create user personas to support various business initiatives. Engage with line-of-business, operations, and project partners to gather process improvements. Lead the design and build of new models to efficiently deliver risk results to senior management. Evaluate data-related tools and technologies and recommend appropriate implementation patterns and standard methodologies to keep our data ecosystem modern. Collaborate with Enterprise Data Architects in establishing and adhering to enterprise standards while also performing POCs to ensure those standards are implemented. Provide technical expertise and mentorship to Data Engineers and Data Analysts on the data architecture. Develop and maintain processes, standards, policies, guidelines, and governance to ensure that a consistent framework and set of standards is applied across the company. Create and maintain conceptual/logical data models to identify key business entities and visualise their relationships. Work with business and IT teams to understand data requirements. Maintain a data dictionary consisting of table and column definitions. Review data models with both technical and business audiences.
YOU’RE GOOD AT
Designing, documenting and training the team on the overall processes and process flows for the data architecture. Resolving technical challenges in critical situations that require immediate resolution. Developing relationships with external stakeholders to maintain awareness of data and security issues and trends. Reviewing work from other tech team members and providing feedback for growth. Implementing data security policies that align with governance objectives and regulatory requirements.
EXPERIENCE & QUALIFICATIONS
Bachelor's degree or equivalent combination of education and experience. Bachelor's degree in information science, data management, computer science or a related field preferred.
Essential Experience & Job Requirements
12+ years of IT experience with a major focus on data warehouse/database-related projects
Expertise in cloud databases like Snowflake/Redshift, data catalogues, MDM, etc.
Expertise in writing SQL and database procedures
Proficient in data modelling: conceptual, logical, and physical modelling
Proficient in documenting all architecture-related work performed
Hands-on experience in data storage, ETL/ELT and data analytics tools and technologies, e.g., Talend, dbt, Attunity, GoldenGate, Fivetran, APIs, Tableau, Power BI, Alteryx, etc.
Experienced in data warehousing design/development and BI/analytical systems
Experience working on projects using Agile methodologies
Strong hands-on experience with data and analytics architecture, solution design, and engineering
Experience with cloud big data technologies such as AWS, Azure, GCP and Snowflake
Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs)
Review existing databases, data architecture and data models across multiple systems and propose architecture enhancements for cross-compatibility and target systems
Excellent written, oral communication and presentation skills to present architecture, features, and solution recommendations
YOU’LL WORK WITH
Global functional portfolio technical leaders (Finance, HR, Marketing, Legal, Risk, IT), product owners, functional area teams across levels
Global Data Portfolio Management & teams (Enterprise Data Model, Data Catalog, Master Data Management)
Consulting and internal Data Portfolio teams
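The posting asks the architect to maintain a data dictionary of table and column definitions. As a minimal sketch, assuming a Postgres-compatible warehouse (Redshift and Snowflake expose similar information_schema views), the snippet below seeds such a dictionary automatically and leaves the business-definition column for data stewards to fill in; the connection string and output file are placeholders.

```python
# Minimal sketch: refresh a table/column data dictionary from information_schema.
# Connection string and output path are placeholders, not details from this posting.
import csv
import psycopg2

DDL_QUERY = """
    SELECT table_schema, table_name, column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_schema NOT IN ('information_schema', 'pg_catalog')
    ORDER BY table_schema, table_name, ordinal_position
"""

def export_data_dictionary(dsn: str, out_path: str = "data_dictionary.csv") -> int:
    """Writes one row per column; business definitions are filled in later by stewards."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(DDL_QUERY)
        rows = cur.fetchall()
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["schema", "table", "column", "type", "nullable", "definition"])
        for schema, table, column, dtype, nullable in rows:
            writer.writerow([schema, table, column, dtype, nullable, ""])  # definition left blank
    return len(rows)
```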
Posted 1 week ago
8.0 years
0 Lacs
Kolkata, West Bengal, India (this role is also posted for Hyderabad, Kanayannur, Trivandrum, Pune, and Noida)
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure.
Your Key Responsibilities
Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3. Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools. Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies. Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence. Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases. Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures. Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation. Own the creation and governance of SOPs, runbooks, and technical documentation for data operations. Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture.
Skills And Attributes For Success
Expertise in AWS data services and ability to lead architectural discussions. Analytical thinker with the ability to design and optimize end-to-end data workflows. Excellent debugging and incident resolution skills in large-scale data environments. Strong leadership and mentoring capabilities, with clear communication across business and technical teams. A growth mindset with a passion for building reliable, scalable data systems. Proven ability to manage priorities and navigate ambiguity in a fast-paced environment.
To qualify for the role, you must have
5–8 years of experience in DataOps, Data Engineering, or related roles. Strong hands-on expertise in Databricks. Deep understanding of ETL pipelines and modern data integration patterns. Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments. Experience in Airflow or AWS Data Pipeline for orchestration and scheduling. Advanced knowledge of IICS or similar ETL tools for data transformation and automation. SQL skills with emphasis on performance tuning, complex joins, and window functions.
Technologies and Tools
Must-haves: Proficient in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda; expert in Databricks, with the ability to develop, optimize, and troubleshoot advanced notebooks; strong experience with Amazon Redshift for scalable data warehousing and analytics; solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline; hands-on with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms.
Good to have: Exposure to Power BI or Tableau for data visualization; familiarity with CDI, Informatica, or other enterprise-grade data integration platforms; understanding of DevOps and CI/CD automation tools for data engineering workflows; SQL familiarity across large datasets and distributed databases.
What We Look For
Enthusiastic learners with a passion for data operations and best practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
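To make the orchestration responsibility above concrete, here is a hedged sketch of an Airflow DAG that triggers an AWS Glue job and, once it succeeds, starts a Redshift load via the Redshift Data API. The DAG id, Glue job name, workgroup, and stored procedure are illustrative placeholders rather than an actual EY pipeline.

```python
# Hedged sketch: orchestrate a Glue transform followed by a Redshift load with Airflow.
# All resource names below are placeholders for illustration only.
import time
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

GLUE_JOB_NAME = "example-sales-transform"  # placeholder Glue job

def run_glue_job(**_):
    glue = boto3.client("glue")
    run_id = glue.start_job_run(JobName=GLUE_JOB_NAME)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=GLUE_JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
        if state == "SUCCEEDED":
            return run_id
        if state in ("FAILED", "STOPPED", "TIMEOUT"):
            raise RuntimeError(f"Glue job ended in state {state}")
        time.sleep(60)  # poll once a minute

def load_to_redshift(**_):
    # The Redshift Data API avoids managing JDBC connections from the worker.
    rsd = boto3.client("redshift-data")
    rsd.execute_statement(
        WorkgroupName="example-workgroup",        # placeholder serverless workgroup
        Database="analytics",
        Sql="CALL analytics.load_daily_sales();", # placeholder stored procedure
    )

with DAG(
    dag_id="example_glue_to_redshift",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="run_glue_job", python_callable=run_glue_job)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    transform >> load
```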
Posted 1 week ago
15.0 years
0 Lacs
Noida
On-site
Required Experience: 15 - 25 Years
Skills: Apache Superset, AWS Redshift, Amazon Redshift + 12 more
Job Description: Associate Director - Data Engineering
We at Pine Labs are looking for those who share our core belief - Every Day is Game Day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.
Responsibilities
Own the data engineering roadmap from streaming ingestion to analytics-ready layers. Lead and evolve our real-time data pipelines using Apache Kafka/MSK, Debezium CDC, and Apache Pinot. Architect and maintain custom-built Java/Python-based frameworks for data ingestion, validation, transformation, and replication. Design and manage data lakes using Apache Iceberg, Glue, and Athena over S3. Define and enforce data modelling standards across Pinot, Redshift and warehouse layers. Lead implementation and scaling of open-source BI tools (Superset) and orchestration platforms (Airflow). Drive cloud-native deployment strategies using ECS/EKS, Terraform/CDK, Docker. Collaborate with ML, product, and business teams to support advanced analytics and AI use cases. Mentor data engineers and evangelize modern architecture and engineering best practices. Ensure observability, lineage, data quality, and governance across the data stack.
What Matters In This Role
Proven expertise in Kafka/MSK, Debezium, and real-time event-driven architectures
Hands-on experience with Pinot, Redshift, RocksDB or NoSQL databases
Strong background in custom tooling using Java and Python
Experience with Apache Airflow, Superset, Iceberg, Athena, and Glue
Strong AWS ecosystem knowledge (IAM, S3, Lambda, Glue, ECS/EKS, CloudWatch, etc.)
Deep understanding of data lake architecture, streaming vs batch processing, and CDC concepts
Familiarity with modern data formats (Parquet, Avro) and storage abstractions
Nice-to-Have
Exposure to dbt, Trino/Presto, ClickHouse, or Druid
Familiarity with data security practices, encryption at rest/in transit, and GDPR/PCI compliance
Experience with DevOps practices, GitHub Actions, CI/CD, and Terraform/CloudFormation
What We Value In Our People
You take the shot: You Decide Fast and You Deliver Right
You are the CEO of what you do: you show ownership and make things happen
You own tomorrow: by building solutions for the merchants and doing the right thing
You sign your work like an artist: You seek to learn and take pride in the work you do
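As a loose illustration of the CDC-driven pipelines this role describes, the sketch below consumes Debezium change events from a Kafka/MSK topic and routes each row change to a downstream sink. The topic name, brokers, and sink call are placeholders; only the standard Debezium envelope fields (op, before, after) are assumed.

```python
# Illustrative sketch: read Debezium CDC events from Kafka and route them downstream.
# Topic, brokers, and the sink are placeholders, not Pine Labs infrastructure.
import json
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "cdc.payments.transactions",                 # placeholder topic
    bootstrap_servers=["broker-1:9092"],         # placeholder MSK brokers
    group_id="cdc-to-pinot",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v) if v else None,
)

def route(event: dict) -> None:
    payload = event.get("payload", {})
    op = payload.get("op")                       # c=create, u=update, d=delete, r=snapshot read
    row = payload.get("after") or payload.get("before")
    # Placeholder sink: a real pipeline might write to Pinot, Iceberg, or S3 instead.
    print(op, row)

for message in consumer:
    if message.value is not None:                # tombstone records arrive as null values
        route(message.value)
```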
Posted 1 week ago
0 years
7 - 9 Lacs
Jaipur
Remote
Location: Jaipur
Employment Type: Full time
Department: Product Support
Life at UiPath
The people at UiPath believe in the transformative power of automation to change how the world works. We’re committed to creating category-leading enterprise software that unleashes that power. To make that happen, we need people who are curious, self-propelled, generous, and genuine. People who love being part of a fast-moving, fast-thinking growth company. And people who care—about each other, about UiPath, and about our larger purpose. Could that be you?
Your mission
We’re looking for a Support Engineer to join the team in Jaipur. It’s a customer-facing role that’s all about problem-solving – great for someone who enjoys helping others and has a real interest in tech. It’s especially suited to anyone thinking about a future in Data Engineering, Data Science or Platform Ops. You’ll need some basic knowledge of SQL and Python, but this isn’t a software development job. Day-to-day, you’ll be working closely with customers, getting to grips with their issues, and helping them troubleshoot problems on the Peak platform and with their deployed applications. You’ll need to be curious, proactive, and comfortable owning a problem from start to finish. If you’re someone who enjoys figuring things out, explaining things clearly, and digging into the root cause of an issue – this could be a great fit.
What you'll do at UiPath
Resolve Technical Issues: Troubleshoot and resolve customer issues on the Peak platform, using your analytical and problem-solving skills.
Own the Process: Take full ownership of problems - investigate, follow up, escalate when needed, and ensure resolution is delivered.
Investigate Errors: Dig into application logs, API responses, and system outputs to identify and resolve errors within Peak-built customer applications.
Write Useful Scripts: Use scripting (e.g. in Python or Bash) to automate routine support tasks, extract data, or investigate technical issues efficiently.
Monitor Systems: Help monitor infrastructure and application health, proactively spotting and flagging unusual behaviour before it becomes a problem.
Support Infrastructure Security: Assist with routine security updates and checks, ensuring our systems remain secure and up-to-date.
Communicate Clearly: Provide timely, professional updates to both internal teams and customers. You’ll often be the bridge between technical and non-technical people.
Contribute to Documentation: Help us build out internal documentation and guides to make solving future issues faster and easier.
Be Part of the Team: Participate in a shared on-call rotation to support our customers when they need us most.
What you'll bring to the team
Educational Requirements: A computer science degree or a related field, or equivalent academic experience in technology.
Technical Skills: Comfortable using Python, Bash, and SQL for scripting, querying data, and troubleshooting. Familiar with Linux and the command line, and confident navigating file systems or running basic system commands. Exposure to cloud platforms (e.g. AWS, GCP, Azure) is a bonus - especially if you’ve explored tools like Snowflake, Redshift, or other modern data warehouses. Any experience with managing or supporting data workflows, backups, restores, or investigating issues across datasets is a strong plus.
Communication Skills: Strong verbal and written communication skills in English. Ability to explain technical concepts clearly and concisely to both technical and non-technical audiences.
Personal Attributes: Well-organised with the ability to handle multiple tasks simultaneously. Strong problem-solving and analytical skills. Fast learner with the ability to adapt to new tools and technologies quickly. Excellent interpersonal skills and the ability to work effectively in a team environment.
Maybe you don’t tick all the boxes above—but still think you’d be great for the job? Go ahead, apply anyway. Please. Because we know that experience comes in all shapes and sizes—and passion can’t be learned.
Many of our roles allow for flexibility in when and where work gets done. Depending on the needs of the business and the role, the number of hybrid, office-based, and remote workers will vary from team to team. Applications are assessed on a rolling basis and there is no fixed deadline for this requisition. The application window may change depending on the volume of applications received or may close immediately if a qualified candidate is selected.
We value a range of diverse backgrounds, experiences and ideas. We pride ourselves on our diversity and inclusive workplace that provides equal opportunities to all persons regardless of age, race, color, religion, sex, sexual orientation, gender identity, and expression, national origin, disability, neurodiversity, military and/or veteran status, or any other protected classes. Additionally, UiPath provides reasonable accommodations for candidates on request and respects applicants' privacy rights. To review these and other legal disclosures, visit our privacy policy.
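For the "write useful scripts" and "investigate errors" parts of the role, here is a small, hedged example of routine support scripting: it groups application-log errors by signature so the noisiest failure surfaces first. The log path and line format are assumptions for illustration, not the Peak platform's actual log layout.

```python
# Hedged example of routine support scripting: triage a log file by error signature.
# The log path and the "ERROR component - message" line format are assumed.
import re
import sys
from collections import Counter

ERROR_RE = re.compile(r"ERROR\s+(?P<component>[\w.]+)\s*[-:]\s*(?P<message>.+)")

def triage(log_path: str, top_n: int = 5) -> None:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_RE.search(line)
            if match:
                # Strip volatile details (ids, counts) so similar errors group together.
                signature = re.sub(r"\d+", "<n>", f"{match['component']}: {match['message']}")
                counts[signature] += 1
    for signature, count in counts.most_common(top_n):
        print(f"{count:5d}  {signature}")

if __name__ == "__main__":
    triage(sys.argv[1] if len(sys.argv) > 1 else "app.log")
```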
Posted 1 week ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles & Responsibilities
We are seeking an experienced and visionary Data Architect with over 15 years of experience to lead the design and implementation of scalable, secure, and high-performing data architectures. The ideal candidate should have a deep understanding of cloud-native architectures, enterprise data platforms, and end-to-end data lifecycle management. You will work closely with business, engineering, and product teams to craft robust data solutions that drive business intelligence, analytics, and AI initiatives.
Key Responsibilities
Design and implement enterprise-grade data architectures using cloud platforms (e.g., AWS, Azure, GCP). Lead the definition of data architecture standards, guidelines, and best practices. Architect scalable data solutions including data lakes, data warehouses, and real-time streaming platforms. Collaborate with data engineers, analysts, and data scientists to understand data requirements and deliver optimal solutions. Oversee data modeling activities including conceptual, logical, and physical data models. Ensure data security, privacy, and compliance with applicable regulations (e.g., GDPR, HIPAA). Define and implement data governance strategies in collaboration with stakeholders. Evaluate and recommend data-related tools and technologies. Provide architectural guidance and mentorship to data engineering teams. Participate in client discussions, pre-sales, and proposal building (if in a consulting environment).
Required Skills & Qualifications
15+ years of experience in data architecture, data engineering, or database development. Strong experience architecting data solutions on at least one major cloud platform (AWS, Azure, or GCP). Deep understanding of data management principles, data modeling, ETL/ELT pipelines, and data warehousing. Hands-on experience with modern data platforms and tools (e.g., Snowflake, Databricks, BigQuery, Redshift, Synapse, Apache Spark). Proficiency with programming languages such as Python, SQL, or Java. Familiarity with real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub. Experience implementing data governance, data cataloging, and data quality frameworks. Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Cloud Architect or Data Architect certification (AWS/Azure/GCP) is a strong plus.
Preferred Certifications
AWS Certified Solutions Architect – Professional
Microsoft Certified: Azure Solutions Architect Expert
Google Cloud Professional Data Engineer
TOGAF or equivalent architecture frameworks
What We Offer
A collaborative and inclusive work environment, the opportunity to work on cutting-edge data and AI projects, flexible work options, and a competitive compensation and benefits package.
Experience: 16-18 Years
Skills
Primary Skill: Data Architecture
Sub Skill(s): Data Architecture
Additional Skill(s): Data Architecture
About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms.
Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
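As a brief, hedged illustration of the real-time ingestion familiarity this posting asks for (Kafka, Kinesis, or Azure Event Hub), the snippet below publishes a JSON event to an Amazon Kinesis data stream with boto3. The stream name and event shape are placeholders.

```python
# Hedged illustration: publish a JSON event to a Kinesis data stream.
# The stream name and event fields are placeholders for illustration only.
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_event(order: dict, stream_name: str = "example-orders-stream") -> None:
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(order).encode("utf-8"),
        PartitionKey=str(order["customer_id"]),  # keeps one customer's events ordered
    )

publish_event({"customer_id": 42, "order_id": "A-1001", "amount": 1299.0})
```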
Posted 1 week ago