
66 AWS Redshift Jobs

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 6.0 years

8 - 13 Lacs

Gurugram

Work from Office

Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.
Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues
Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of and alignment with the company's long-term vision
- Leadership, guidance, and support for team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team
Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
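For context on the Glue/Lambda/Redshift pipeline work this posting describes, here is a minimal sketch of triggering an AWS Glue ETL job from Python with boto3 and polling it to completion. The job name and region are hypothetical placeholders, not details from the listing.

```python
import time

import boto3  # AWS SDK for Python

# Hypothetical Glue job name and region; substitute your own.
GLUE_JOB_NAME = "sales_to_redshift_etl"

glue = boto3.client("glue", region_name="ap-south-1")


def run_glue_job(job_name: str, poll_seconds: int = 30) -> str:
    """Start an AWS Glue job run and block until it finishes, returning the final state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(poll_seconds)


if __name__ == "__main__":
    print(run_glue_job(GLUE_JOB_NAME))
```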

Posted 18 hours ago

Apply

2.0 - 6.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark, Delta Lake).
- Design and optimize data models in AWS Redshift for performance and scalability.
- Manage Redshift clusters and EC2-based deployments, ensuring reliability and cost efficiency.
- Integrate data from diverse sources (structured/unstructured) into centralized data platforms.
- Implement data quality checks, monitoring, and logging across pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets.
Required Skills & Experience:
- 3-6 years of experience in data engineering.
- Strong expertise in Databricks (Spark, Delta Lake, notebooks, job orchestration).
- Hands-on experience with AWS Redshift (cluster management, performance tuning, workload optimization).
- Proficiency with AWS EC2, S3, and related AWS services.
- Strong SQL and Python skills.
- Experience with CI/CD and version control (Git).
Preferred candidate profile: We are seeking a skilled Data Engineer with hands-on experience in Databricks and AWS Redshift (including EC2 deployments) to design, build, and optimize data pipelines that support analytics and business intelligence initiatives.
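To illustrate the Databricks/Delta Lake pipeline work described above, here is a minimal PySpark sketch, assuming a Delta-enabled Spark runtime such as Databricks; the bucket paths and column names are illustrative placeholders rather than anything from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON landed in S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-landing-zone/orders/")

# Transform: basic cleansing plus a simple data quality filter.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: write a Delta table, partitioned by date, ready for downstream Redshift loads or BI.
(
    clean.write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .save("s3://example-curated-zone/orders_delta/")
)
```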

Posted 21 hours ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - Data Engineer, AWS + Python, Spark, Kafka for ETL!
Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark with appropriate cloud-based services such as AWS.
- Build data pipelines by building ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.
Qualifications we seek in you!
Minimum Qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks will be an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
Preferred Qualifications/Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and Cloud certifications, Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
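As an illustration of the Kafka-to-data-lake ingestion pattern this role describes, here is a minimal Spark Structured Streaming sketch. Broker addresses, topic name, schema, and paths are hypothetical, and running it also assumes the spark-sql-kafka package is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_events_to_lake").getOrCreate()

# Illustrative event schema for the JSON messages on the topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka topic as a stream and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "payment-events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Continuously append the parsed events to the data lake as Parquet files.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://example-data-lake/payment_events/")
          .option("checkpointLocation", "s3a://example-data-lake/_checkpoints/payment_events/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```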

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics engineering is on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. You play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will concentrate on designing and building data infrastructure and systems to enable efficient data processing and analysis. Your responsibilities include developing and implementing data pipelines, data integration, and data transformation solutions. As an AWS Architect / Manager at PwC - AC, you will interact with Offshore Manager/Onsite Business Analyst to understand the requirements and will be responsible for end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data hub in AWS. Strong experience in AWS cloud technology is required, along with planning and organization skills. You will work as a cloud Architect/lead on an agile team and provide automated cloud solutions, monitoring the systems routinely to ensure that all business goals are met as per the Business requirements. **Position Requirements:** **Must Have:** - Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions - Strong expertise in the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake, Data hub in AWS - Hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, Big Data model techniques using Python / Java - Design scalable data architectures with Snowflake, integrating cloud technologies (AWS, Azure, GCP) and ETL/ELT tools such as DBT - Guide teams in proper data modeling (star, snowflake schemas), transformation, security, and performance optimization - Experience in load from disparate data sets and translating complex functional and technical requirements into detailed design - Deploying Snowflake features such as data sharing, events, and lake-house patterns - Experience with data security and data access controls and design - Understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake, dimensional modeling) - Good knowledge of AWS, Azure, or GCP data storage and management technologies such as S3, Blob/ADLS, and Google Cloud Storage - Proficient in Lambda and Kappa Architectures - Strong AWS hands-on expertise with a programming background preferably Python/Scala - Knowledge of Big Data frameworks and related technologies with experience in Hadoop and Spark - Strong experience in AWS compute services like AWS EMR, Glue, and Sagemaker and storage services like S3, Redshift & Dynamodb - Experience with AWS Streaming Services like AWS Kinesis, AWS SQS, and AWS MSK - Troubleshooting and Performance tuning experience in Spark framework - Spark core, Sql, and Spark Streaming - Experience in flow tools like Airflow, Nifi, or Luigi - Knowledge of Application DevOps tools (Git, CI/CD Frameworks) - Experience in Jenkins or Gitlab with rich experience in source code management like Code Pipeline, Code Build, and Code Commit - Experience with AWS CloudWatch, AWS Cloud Trail, AWS Account Config, AWS Config Rules - Understanding of Cloud data migration processes, methods, and project lifecycle - Business/domain knowledge in Financial Services/Healthcare/Consumer Market/Industrial Products/Telecommunication, Media and Technology/Deal advisory along with technical expertise - Experience in leading technical 
teams, guiding and mentoring team members - Analytical & problem-solving skills - Communication and presentation skills - Understanding of Data Modeling and Data Architecture **Desired Knowledge/Skills:** - Experience in building stream-processing systems using solutions such as Storm or Spark-Streaming - Experience in Big Data ML toolkits like Mahout, SparkML, or H2O - Knowledge in Python - Certification on AWS Architecture desirable - Worked in Offshore/Onsite Engagements - Experience in AWS services like STEP & Lambda - Project Management skills with consulting experience in Complex Program Delivery **Professional And Educational Background:** BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA **Minimum Years Experience Required:** Candidates with 8-12 years of hands-on experience.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As an AWS Developer at PwC's Advisory Acceleration Center, you will collaborate with the Offshore Manager and Onsite Business Analyst to comprehend requirements and take charge of implementing Cloud data engineering solutions on AWS, such as Enterprise Data Lake and Data hub. With a focus on architecting and delivering scalable cloud-based enterprise data solutions, you will bring your expertise in end-to-end implementation of Cloud data engineering solutions using tools like Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, and Big Data model techniques using Python/Java. Your responsibilities will include loading disparate data sets, translating complex requirements into detailed designs, and deploying Snowflake features like data sharing, events, and lake-house patterns. You are expected to possess a deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling, and demonstrate strong hands-on expertise in AWS services such as EMR, Glue, Sagemaker, S3, Redshift, Dynamodb, and AWS Streaming Services like Kinesis, SQS, and MSK. Troubleshooting and performance tuning experience in Spark framework, familiarity with flow tools like Airflow, Nifi, or Luigi, and proficiency in Application DevOps tools like Git, CI/CD frameworks, Jenkins, and Gitlab are essential for this role. Desired skills include experience in building stream-processing systems using solutions like Storm or Spark-Streaming, knowledge in Big Data ML toolkits such as Mahout, SparkML, or H2O, proficiency in Python, and exposure to Offshore/Onsite Engagements and AWS services like STEP & Lambda. Candidates with 2-4 years of hands-on experience in Cloud data engineering solutions, a professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA, and a passion for problem-solving and effective communication are encouraged to apply to be part of PwC's dynamic and inclusive work culture, where learning, growth, and excellence are at the core of our values. Join us at PwC, where you can make a difference today and shape the future tomorrow!,
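Since the posting above lists Airflow among the expected flow tools, here is a minimal Airflow DAG sketch of the kind of pipeline orchestration it refers to. The DAG id, schedule, and task bodies are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_source(**context):
    # Placeholder: pull incremental data from the source system.
    print("extracting incremental data")


def load_to_redshift(**context):
    # Placeholder: COPY the extracted files from S3 into Redshift staging tables.
    print("loading staged files into Redshift")


with DAG(
    dag_id="example_daily_lake_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_source", python_callable=extract_from_source)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load  # run the load only after extraction succeeds
```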

Posted 5 days ago

Apply

0.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are seeking a Data Engineer Intern or Trainee with the following key skills:
- Proficient in SQL database tuning and performance optimization
- Experience with Airflow implementation using Python or Scala
- Strong knowledge of Python and PySpark
- Familiarity with AWS Redshift, Snowflake, or Databricks for data warehousing
- Ability to work with ETL services in AWS such as EMR, Glue, S3, and Redshift, or similar services in GCP or Azure
This opportunity is open to both freshers and individuals with up to 1 year of experience. Comprehensive on-the-job training will be provided for freshers. Candidates with a B.Tech background and no prior IT experience are also encouraged to apply.
Job Types: Full-time, Permanent, Fresher
Benefits:
- Paid sick time
- Performance bonus
Schedule:
- Monday to Friday
Experience:
- Total work: 1 year (Preferred)
Work Location: In person
Expected Start Date: 04/08/2025

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As an AWS Developer at PwC's Acceleration Center in Bangalore, you will be responsible for the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data hub in AWS. You will collaborate with Offshore Manager/Onsite Business Analyst to understand requirements and architect scalable, distributed, cloud-based enterprise data solutions. Your role will involve hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, and Big Data model techniques using Python/Java. You must have a deep understanding of relational and NoSQL data stores, methods, and approaches such as star and snowflake dimensional modeling. Strong expertise in AWS services like EMR, Glue, Sagemaker, S3, Redshift, Dynamodb, and streaming services like Kinesis, SQS, and MSK is essential. Troubleshooting and performance tuning experience in Spark framework, along with knowledge of flow tools like Airflow, Nifi, or Luigi, is required. Experience with Application DevOps tools like Git, CI/CD Frameworks, Jenkins, or Gitlab is preferred. Familiarity with AWS CloudWatch, Cloud Trail, Account Config, Config Rules, and Cloud data migration processes is expected. Good analytical, problem-solving, communication, and presentation skills are essential for this role. Desired skills include building stream-processing systems using Storm or Spark-Streaming, experience in Big Data ML toolkits like Mahout, SparkML, or H2O, and knowledge of Python. Exposure to Offshore/Onsite Engagements and AWS services like STEP and Lambda would be a plus. Candidates with 2-4 years of hands-on experience in cloud data engineering solutions and a background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA are encouraged to apply. Travel to client locations may be required based on project needs. This position falls under the Advisory line of service and the Technology Consulting horizontal, with the designation of Associate based in Bangalore, India. If you are passionate about working in a high-performance culture that values diversity, inclusion, and professional development, PwC could be the ideal place for you to grow and excel in your career. Apply now to be part of a global team dedicated to solving important problems and making a positive impact on the world.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Business Intelligence Analyst in our team, you will collaborate with product managers, engineers, and business stakeholders to establish key performance indicators (KPIs) and success metrics for Creator Success. Your role involves creating detailed dashboards and self-service analytics tools utilizing platforms like QuickSight, Tableau, or similar Business Intelligence (BI) tools. You will conduct in-depth analysis on customer behavior, content performance, and livestream engagement patterns. Developing and maintaining robust ETL/ELT pipelines to handle large volumes of streaming and batch data from the Creator Success platform is a key responsibility. Additionally, you will be involved in designing and optimizing data warehouses, data lakes, and real-time analytics systems using AWS services such as Redshift, S3, Kinesis, EMR, and Glue. Ensuring data accuracy and reliability is crucial, and you will implement data quality frameworks and monitoring systems. Your qualifications should include a Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field. With at least 3 years of experience in business intelligence or analytic roles, you should have proficiency in SQL, Python, and/or Scala. Expertise in AWS cloud services like Redshift, S3, EMR, Glue, Lambda, and Kinesis is required. You should have a strong background in building and optimizing ETL pipelines, data warehousing solutions, and big data technologies like Spark and Hadoop. Familiarity with distributed computing frameworks, business intelligence tools (QuickSight, Tableau, Looker), and data visualization best practices is essential. Your proficiency in SQL and Python is highly valued, along with skills in AWS Lambda, QuickSight, Power BI, AWS S3, AWS Kinesis, ETL, Scala, AWS EMR, Hadoop, Spark, AWS Glue, and data warehousing. If you are passionate about leveraging data to drive business decisions and have a strong analytical mindset, we welcome you to join our team and make a significant impact in the field of Business Intelligence.,
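Because this role mentions real-time analytics with Kinesis, here is a minimal sketch of publishing an engagement event to a Kinesis data stream with boto3. The stream name, region, and event fields are hypothetical and illustrative only.

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def publish_engagement_event(stream_name: str, event: dict) -> None:
    """Send one livestream-engagement event to a Kinesis data stream."""
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["creator_id"]),  # keeps one creator's events on the same shard
    )


publish_engagement_event(
    "creator-engagement-events",  # hypothetical stream name
    {"creator_id": 42, "event_type": "livestream_like", "ts": "2024-01-01T12:00:00Z"},
)
```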

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

About McDonald's: McDonald's Corporation is one of the world's largest employers with a presence in over 100 countries. In Hyderabad, corporate opportunities are available at our global offices, which serve as innovative and operational hubs. The purpose of these offices is to enhance McDonald's global talent pool and in-house expertise. The new office in Hyderabad aims to bring together knowledge in areas like business, technology, analytics, and AI, thereby accelerating our ability to provide impactful solutions for the business and customers worldwide.
Position Summary: As an Associate Technical Product Analyst in the Global Technology Enterprise Products & Platforms (EPP) Team, you will focus on data management and operations within the Global Data & Analytics Platform (GDAP). The vision of this platform is to always be people-led, product-centric, forward-thinking, and a trusted technology partner. Reporting to the Technical Product Manager, the Senior Technical Analyst will support technical product management leadership, provide guidance to developers, manage squad output, and participate in roadmap and backlog preparation.
Responsibilities & Accountabilities:
Product roadmap and backlog preparation:
- Collaborate with the Technical Product Manager to prioritize roadmap and backlog items
- Analyze existing processes to identify inefficiencies and opportunities for improvement
- Create detailed requirements documents, user stories, and acceptance criteria
- Lead agile ceremonies and act as a leader for Software Development Engineers
Technical solutioning and feature development/releases:
- Work on designing, developing, and documenting Talend ETL processes
- Administer Talend software for data integration and quality
- Collaborate with business users and product teams on various management activities
- Analyze patches, review defects, and ensure high standards for product delivery
Qualifications:
Basic Qualifications:
- Bachelor's degree in computer science or engineering
- 3+ years of experience with AWS Redshift and Talend
- Experience in data warehousing is a plus
- Knowledge of Agile software development and working collaboratively with business partners
- Strong communication skills and the ability to translate technical concepts into business requirements
Preferred Qualifications:
- Hands-on experience with AWS Redshift, Talend, and other AWS services
- Proficiency in SQL, data integration tools, and scripting languages
- Understanding of DevOps practices and tools
- Experience with JIRA, Confluence, and product-centric organizations
- Knowledge of cloud architecture, cybersecurity, and IT General Controls (ITGC)
Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid
Additional Information: Any additional information specific to the job or work environment will be communicated as needed.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The Position is a senior technical, hands-on delivery role, requiring the knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports i.e. Financial Crime, Financial Risk, and Compliance technology transformation. The ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: - Ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases. - Driving improvements in the reliability and frequency of data ingestion including increasing real-time coverage. - Support and enhancement of data ingestion infrastructure and pipelines. - Designing and implementing data pipelines that will collect data from disparate sources across the enterprise, and from external sources and deliver it to our data platform. - Extract Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatically manipulation data throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service, and customer along said data flow. - Identifying and onboarding data sources using existing schemas and where required, conducting exploratory data analysis to investigate and provide solutions. - Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities. Core/Must-Have Skills: - 3-8 years of expertise in designing and implementing data warehouses, data lakes using Oracle Tech Stack (ETL: ODI, SSIS, DB: PLSQL and AWS Redshift). - At least 4+ years of experience in managing data extraction, transformation, and loading various sources using Oracle Data Integrator with exposure to other tools like SSIS. - At least 4+ years of experience in Database Design and Dimension modeling using Oracle PLSQL, Microsoft SQL Server. - Experience in developing ETL processes ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization workflow design, etc. - Advanced working SQL Knowledge and experience working with relational and NoSQL databases as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4J). - Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems. - Expertise in data modeling and DB Design with skills in performance tuning. - Experience with OLAP, OLTP databases, and data structuring/modeling with an understanding of key data points. 
- Experience building and optimizing data pipelines on Azure Databricks or AWS glue or Oracle cloud. - Create and Support ETL Pipelines and table schemas to facilitate the accommodation of new and existing data sources for the Lakehouse. - Experience with data visualization (Power BI/Tableau) and SSRS. Good to Have: - Experience working in Financial Crime, Financial Risk, and Compliance technology transformation domains. - Certification in any cloud tech stack preferred Microsoft Azure. - In-depth knowledge and hands-on experience with data engineering, Data Warehousing, and Delta Lake on-prem (Oracle RDBMS, Microsoft SQL Server) and cloud (Azure or AWS or Oracle cloud). - Ability to script (Bash, Azure CLI), Code (Python, C#), query (SQL, PLSQL, T-SQL) coupled with software versioning control systems (e.g. GitHub) AND ci/cd systems. - Design and development of systems for the maintenance of the Azure/AWS Lakehouse, ETL process, business Intelligence, and data ingestion pipelines for AI/ML use cases.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose. Your role involves having IT experience with a minimum of 5+ years in creating data warehouses, data lakes, ETL/ELT, data pipelines on cloud. You should have experience in data pipeline implementation with cloud providers such as AWS, Azure, GCP, preferably in the Life Sciences Domain. Experience with cloud storage, cloud database, cloud Data Warehousing, and Data Lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3 is essential. You should also be familiar with cloud data integration services for structured, semi-structured, and unstructured data like Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc. Good knowledge of Infra capacity sizing, costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling is required. Your profile should demonstrate the ability to contribute to making architectural choices using various cloud services and solution methodologies. Expertise in programming using Python is a must. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud is essential. Understanding networking, security, design principles, and best practices in the cloud is expected. Knowledge of IoT and real-time streaming would be an added advantage. You will be leading architectural/technical discussions with clients and should possess excellent communication and presentation skills. At Capgemini, we recognize the significance of flexible work arrangements to provide support. Whether it's remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. Our mission is centered on your career growth, offering an array of career growth programs and diverse professions crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of over 340,000 team members in more than 50 countries, Capgemini has a strong heritage of over 55 years. Clients trust Capgemini to unlock the value of technology to address the entire breadth of their business needs, delivering end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by market-leading capabilities in AI, Generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You have 5 to 7 years of experience and are skilled in Python, PySpark, and SQL. As a Data Engineer at AceNet Consulting, you will design, develop, and maintain data pipelines using these technologies. You should have hands-on experience with cloud data platforms and data modeling. Proficiency in tools like Git and GitHub is required, along with strong knowledge of query performance tuning. The ideal candidate for this role has at least 5 years of experience in a complex data ecosystem, working in an agile environment. Experience with cloud data platforms such as AWS Redshift and Databricks is a plus. Problem-solving and communication skills are essential for this position. Joining AceNet Consulting offers you the opportunity to work on transformative projects with leading global firms, continuous investment in your professional development, competitive compensation and benefits, ESOPs, and international assignments. The company values a supportive environment, work-life balance, and employee well-being. AceNet Consulting fosters an open culture that encourages diverse perspectives, transparent communication, and rewards contributions. If you meet the qualifications mentioned above and are passionate about technology, thrive in a fast-paced environment, and want to be part of a dynamic team, submit your resume to apply for this Data Engineer position at AceNet Consulting.,

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Vadodara, Gujarat

On-site

As a Lead Data Engineer at Rearc, you will play a crucial role in establishing and maintaining technical excellence within our data engineering team. Your extensive experience in data architecture, ETL processes, and data modeling will be key in optimizing data workflows for efficiency, scalability, and reliability. Collaborating closely with cross-functional teams, you will design and implement robust data solutions that align with business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders is essential as you drive data-driven initiatives and ensure their successful implementation. With over 10 years of experience in data engineering or related fields, you bring a wealth of expertise in managing and optimizing data pipelines and architectures. Your proficiency in Java and/or Python, along with experience in data pipeline orchestration using platforms like Airflow, Databricks, DBT, or AWS Glue, will be invaluable. Hands-on experience with data analysis tools and libraries such as Pyspark, NumPy, Pandas, or Dask is required, while proficiency with Spark and Databricks is highly desirable. Your proven track record of leading complex data engineering projects, coupled with hands-on experience in ETL processes, data warehousing, and data modeling tools, enables you to deliver efficient and robust data pipelines. You possess in-depth knowledge of data integration tools and best practices, as well as a strong understanding of cloud-based data services and technologies like AWS Redshift, Azure Synapse Analytics, and Google BigQuery. Your strategic and analytical skills will enable you to solve intricate data challenges and drive data-driven decision-making. In this role, you will collaborate with stakeholders to understand data requirements and challenges, implement data solutions with a DataOps mindset using modern tools and frameworks, lead data engineering projects, mentor junior team members, and promote knowledge sharing through technical blogs and articles. Your exceptional communication and interpersonal skills will facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels. At Rearc, we empower engineers to build innovative products and experiences by providing them with the best tools possible. If you are a cloud professional with a passion for problem-solving and a desire to make a difference, join us in our mission to solve problems and drive innovation in the field of data engineering.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are seeking a skilled and experienced Data Engineer with a minimum of 5 years of expertise in data engineering and data migration projects. The ideal candidate will have a strong proficiency in SQL, Python, data modeling, data warehousing, and ETL pipeline development. It is essential to have hands-on experience with big data tools such as Hadoop and Spark, as well as familiarity with various AWS services including Redshift, S3, Glue, EMR, and Lambda. This position offers a fantastic opportunity to contribute to large-scale data solutions that drive data-informed decision-making and operational efficiency. As a Data Engineer, your responsibilities will include designing, building, and maintaining scalable data pipelines and ETL processes. You will be tasked with developing and optimizing data models and data warehouse architectures, as well as implementing and managing big data technologies and cloud-based data solutions. Your role will involve performing data migration, transformation, and integration from multiple sources, collaborating with cross-functional teams to understand data requirements, and ensuring data quality, consistency, and security throughout all data pipelines and storage systems. Additionally, you will be responsible for optimizing performance and managing cost-efficient AWS cloud resources. Basic qualifications for this role include a Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or a related field, along with a minimum of 5 years of hands-on experience in Data Engineering and data migration projects. Proficiency in SQL and Python for data processing and analysis is required, as well as a strong background in data modeling, data warehousing, and building data pipelines. The ideal candidate will have practical experience with big data technologies like Hadoop and Spark, and expertise in utilizing AWS services such as Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, and IAM. An understanding of ETL development best practices and principles is also expected. Preferred qualifications include knowledge of data security and data privacy best practices, experience with DevOps and CI/CD practices related to data workflows, familiarity with data lake architectures and real-time data streaming, strong problem-solving abilities, attention to detail, excellent verbal and written communication skills, and the ability to work both independently and collaboratively in a team environment. Desirable skills for this role include experience with orchestration tools like Airflow or Step Functions, exposure to BI/Visualization tools like QuickSight, Tableau, or Power BI, and an understanding of data governance and compliance standards.,
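To make the S3-to-Redshift loading work described above concrete, here is a minimal sketch of running a Redshift COPY from staged Parquet files over the PostgreSQL protocol with psycopg2. The cluster endpoint, credentials, table names, and IAM role ARN are all hypothetical placeholders.

```python
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Bulk-load staged Parquet files from S3 into a staging table (placeholders throughout).
COPY_SQL = """
    COPY analytics.orders_staging
    FROM 's3://example-bucket/exports/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="replace-me",
)
try:
    with conn, conn.cursor() as cur:      # the connection context manager commits on success
        cur.execute(COPY_SQL)
        cur.execute("ANALYZE analytics.orders_staging;")  # refresh planner statistics
finally:
    conn.close()
```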

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer with 6-9 years of experience, your role will be based in Pune. You should have a minimum of 5 years of experience as a Data Engineer, along with hands-on expertise in Star/Snowflake schema design, data modeling, data pipelining, and MLOps. Your proficiency in Data Warehouse technologies like Snowflake, AWS Redshift, and AWS data pipelines (Lambda, AWS Glue, Step Functions, etc.) will be crucial. Strong skills in SQL and at least one major programming language (Python/Java) are required. Additionally, you should be experienced with Data Analysis Tools such as Looker or Tableau, and have familiarity with Pandas, Numpy, Scikit-learn, and Jupyter notebooks. Knowledge of Git, GitHub, and JIRA is preferred. Your ability to identify and resolve data quality issues, provide end-to-end data platform support, and work effectively as an individual contributor is essential. In this role, you will need to possess strong analytical and problem-solving skills, with meticulous attention to detail. A positive mindset, can-do attitude, and a focus on simplifying tasks and building reusable components will be highly valued. You should be able to assess the suitability of new technologies for solving business problems and establish strong relationships with various stakeholders. Your responsibilities will involve designing, developing, and maintaining an accurate, secure, available, and fast data platform. You will engineer efficient, adaptable, and scalable data pipelines, integrate various data sources, create standardized datasets, and ensure product changes align well with the data platform. Collaborating with cross-functional teams, understanding their challenges, and providing data-driven solutions will be key aspects of your role. Overall, your technical skills, including expertise in data engineering, schema design, data modeling, and data warehousing, will play a vital role in driving the success of the data platform and meeting the goals of the organization.,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should have a strong knowledge of AWS services including S3, AWS DMS (Database Migration Service), and AWS Redshift Serverless. Experience in setting up and managing data pipelines using AWS DMS is required. Proficiency in creating and managing data storage solutions using AWS S3 is a key aspect of this role. You should also be proficient in working with relational databases, particularly PostgreSQL, Microsoft SQL Server, and Oracle. Experience in setting up and managing data warehouses, particularly AWS Redshift Serverless, is important for this position. Your responsibilities will include utilizing analytical and problem-solving skills to analyze and interpret complex data sets. You should have experience in identifying and resolving data integration issues such as inconsistencies or discrepancies. Strong problem-solving skills are needed to troubleshoot and resolve data integration and migration issues effectively. Soft skills are also essential for this role. You should be able to work collaboratively with database administrators and other stakeholders to ensure integration solutions meet business requirements. Strong communication skills are required to document data integration processes, including data source definitions, data flow diagrams, and system interactions. Additionally, you should be able to participate in design reviews and provide input on data integration plans. A willingness to stay updated with the latest data integration tools and technologies and recommend upgrades when necessary is expected. Knowledge of data security and privacy regulations is crucial. Experience in ensuring adherence to data security and privacy standards during data integration processes is required. AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database - Specialty are a plus for this position.,
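For the AWS DMS pipeline management this role calls for, here is a minimal boto3 sketch that starts an existing replication task and checks its status. The task ARN and region are hypothetical; the sketch assumes the DMS endpoints and task have already been created.

```python
import boto3

dms = boto3.client("dms", region_name="ap-south-1")

# Hypothetical ARN of a pre-created task that replicates a relational source into S3/Redshift.
TASK_ARN = "arn:aws:dms:ap-south-1:123456789012:task:EXAMPLETASKID"

# Kick off (or re-run) the migration/replication task.
dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="reload-target",  # or "start-replication" / "resume-processing"
)

# Inspect the task's current status and progress statistics.
task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)["ReplicationTasks"][0]
print(task["Status"], task.get("ReplicationTaskStats", {}))
```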

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

Job Description: We are seeking a Senior Data Engineer with over 5 years of experience in Python, SQL, PySpark, and various AWS services such as Lambda, Glue Jobs, and Athena. This position is located in Hyderabad and necessitates on-site work. As a Senior Data Engineer, you will be tasked with constructing robust data pipelines, creating scalable ETL solutions, and overseeing cloud-based data systems. Key Responsibilities: - Designing, developing, and enhancing large-scale ETL/ELT pipelines utilizing PySpark and AWS Glue. - Deploying serverless workflows using AWS Lambda, Athena, and S3 for data processing and querying purposes. - Crafting efficient and scalable SQL queries to handle data in Redshift, Athena, and other data platforms. - Collaborating with diverse teams to comprehend business data requirements and devise suitable solutions. - Establishing and managing infrastructure using CloudFormation or similar Infrastructure-as-Code tools. - Integrating APIs and overseeing message queues utilizing SNS and SQS for both real-time and batch processing requirements. - Identifying, monitoring, and improving the performance of data pipelines and cloud solutions. Required Skills: - Over 5 years of practical experience in: - Python and SQL - PySpark - AWS Glue Jobs, AWS Lambda, Athena - Profound familiarity with AWS Redshift, S3, CloudFormation, SNS, and SQS. - Sound comprehension of data modeling, ETL frameworks, and distributed data systems. - Proficiency in API integrations and consuming RESTful services.,
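Since this role centers on serverless querying with Athena from Python, here is a minimal boto3 sketch that submits a query, waits for it, and fetches the rows. The database, table, and results bucket are hypothetical placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")


def run_athena_query(sql: str, database: str, output_s3: str) -> list:
    """Run a SQL statement in Athena, wait for completion, and return the raw result rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]


rows = run_athena_query(
    "SELECT event_date, COUNT(*) FROM events GROUP BY event_date",  # illustrative query
    database="analytics_db",
    output_s3="s3://example-athena-results/",
)
print(rows[:3])
```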

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an AWS Consultant specializing in Infrastructure, Data & AI, and Databricks, you will play a crucial role in designing, implementing, and optimizing AWS Infrastructure solutions. Your expertise will be utilized to deliver secure and scalable data solutions using various AWS services and platforms. Your responsibilities will also include architecting and implementing ETL/ELT pipelines, data lakes, and distributed compute frameworks. You will be expected to work on automation and infrastructure as code using tools like CloudFormation or Terraform, and manage deployments through AWS CodePipeline, GitHub Actions, or Jenkins. Collaboration with internal teams and clients to gather requirements, assess current-state environments, and define cloud transformation strategies will be a key aspect of your role. Your support during pre-sales and delivery cycles will involve contributing to RFPs, SOWs, LOEs, solution blueprints, and technical documentation. Ensuring best practices in cloud security, cost governance, and compliance will be a priority. The ideal candidate for this position will possess 3 to 5 years of hands-on experience with AWS services, a Bachelor's degree or equivalent experience, and a strong understanding of cloud networking, IAM, security best practices, and hybrid connectivity. Proficiency in Databricks on AWS, experience with data modeling, ETL frameworks, and working with structured/unstructured data are required skills. Additionally, you should have working knowledge of DevOps tools and processes in the AWS ecosystem, strong documentation skills, and excellent communication abilities to translate business needs into technical solutions. Preferred certifications for this role include AWS Certified Solutions Architect - Associate or Professional, AWS Certified Data Analytics - Specialty (preferred), and Databricks Certified Data Engineer Associate/Professional (a plus).,

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The Position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure, platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, a keen interest in Financial Crime, Financial Risk, and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role, you will: - Ingest and provision raw datasets, enriched tables, and curated, re-usable data assets to enable a variety of use cases. - Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage. - Support and enhance data ingestion infrastructure and pipelines. - Design and implement data pipelines to collect data from disparate sources across the enterprise and external sources and deliver it to the data platform. - Implement Extract Transform and Load (ETL) workflows, ensuring data availability at each stage in the data flow. - Identify and onboard data sources, conduct exploratory data analysis, and evaluate modern technologies, frameworks, and tools in the data engineering space. Core/Must-Have skills: - 3-8 years of expertise in designing and implementing data warehouses, data lakes using Oracle Tech Stack (ETL: ODI, SSIS, DB: PLSQL, and AWS Redshift). - Experience in managing data extraction, transformation, and loading various sources using Oracle Data Integrator and other tools like SSIS. - Database Design and Dimension modeling using Oracle PLSQL, Microsoft SQL Server. - Advanced working SQL Knowledge and experience with relational and NoSQL databases. - Strong analytical and critical thinking skills, expertise in data Modeling and DB Design, and experience building and optimizing data pipelines. Good to have: - Experience in Financial Crime, Financial Risk, and Compliance technology transformation domains. - Certification on any cloud tech stack preferred Microsoft Azure. - In-depth knowledge and hands-on experience with data engineering, Data Warehousing, and Delta Lake on-prem and cloud platforms. - Ability to script, code, query, and design systems for maintaining Azure/AWS Lakehouse, ETL processes, business Intelligence, and data ingestion pipelines. EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.,

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Haryana

On-site

You should have 8-10 years of operational knowledge in Microservices and .Net Fullstack, with experience in C# or Python development, as well as Docker. Additionally, experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is necessary, and familiarity with AWS Kinesis and AWS Redshift is preferred. A strong desire to learn new technologies and skills is highly valued. Experience with unit testing and Test-Driven Development (TDD) methodology is considered an asset. You should possess strong team spirit, analytical skills, and the ability to synthesize information. Having a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code is important. Fluency in English is required due to the multicultural and international nature of the team. In this role, you will have the opportunity to develop your technical skills in C# .NET and/or Python, Oracle, PostgreSQL, AWS, ELK (Elasticsearch, Logstash, Kibana), GIT, GitHub, TeamCity, Docker, and Ansible.,

Posted 3 weeks ago

Apply

10.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required. Familiarity with various architecture patterns like data lake, data lake house, and data mesh is also important. You should have a good understanding of Data Warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, and Data Zone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge in designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and Sagemaker is advantageous. You should also have experience with modern development workflows like git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. Possessing an AWS Professional/Specialty certification or relevant cloud expertise is a plus. In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in various architectural, design, and status calls, and showcase good presentation skills when interacting with executives, IT Management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role. The ideal candidate for this position should have 10 to 18 years of experience and be able to reference the job with the number 12895.,

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Haryana

On-site

You should have 8-10 years of operational knowledge in Microservices and .Net Fullstack, C# or Python development, along with experience in Docker. Additionally, experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is a must, and familiarity with AWS Kinesis and AWS Redshift is desirable. A genuine interest in mastering new technologies is essential for this role. Experience with unit testing and Test-Driven Development (TDD) methodology will be considered as assets. Strong team spirit, analytical skills, and the ability to synthesize information are key qualities we are looking for. Having a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code is highly valued. Being fluent in English is important as you will be working in a multicultural and international team. In this role, you will have the opportunity to develop your technical skills in the following areas: C# .NET and/or Python programming, Oracle and PostgreSQL databases, AWS services, ELK (Elasticsearch, Logstash, Kibana) stack, as well as version control tools like GIT and GitHub, continuous integration with TeamCity, containerization with Docker, and automation using Ansible.,

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Associate Technical Product Analyst - Global Data & Analytics Platform at McDonald's Corporation in Hyderabad, you will be an integral part of the Global Technology Enterprise Products & Platforms (EPP) Team. In this role, you will focus on data management & operations within the Global Data & Analytics Platform (GDAP) to support integrations with core Corporate Accounting/Financial/Reporting applications. Your vision will align with McDonald's goal to be a people-led, product-centric, forward-thinking, and trusted technology partner. Your responsibilities will include supporting the Technical Product Management leadership in technical/IT-related delivery topics such as trade-offs in implementation approaches and tech stack selection. You will provide technical guidance for developers/squad members, manage the output of internal/external squads to ensure adherence to McDonald's standards, participate in roadmap and backlog preparation, and maintain technical process flows and solution architecture diagrams at the product level. Additionally, you will lead acceptance criteria creation, validate development work, support hiring and development of engineers, and act as a technical developer as needed. To excel in this role, you should possess a Bachelor's degree in computer science or engineering, along with at least 3 years of hands-on experience designing and implementing solutions using AWS RedShift and Talend. Experience in data warehouse is a plus, as is familiarity with accounting and financial solutions across different industries. Knowledge of Agile software development processes, collaborative problem-solving skills, and excellent communication abilities are essential for success in this position. Preferred qualifications include proficiency in SQL, data integration tools, and scripting languages, as well as a strong understanding of Talend, AWS Redshift, and other AWS services. Experience with RESTful APIs, microservices architecture, DevOps practices, and tools like Jenkins and GitHub is highly desirable. Additionally, foundational expertise in security standards, cloud architecture, and Oracle cloud security will be advantageous. This full-time role based in Hyderabad, India, offers a hybrid work mode. If you are a detail-oriented individual with a passion for leveraging technology to drive business outcomes and are eager to contribute to a global team dedicated to innovation and excellence, we invite you to apply for the position of Associate Technical Product Analyst at McDonald's Corporation.,

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of financial and non-financial services across the globe. The position is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 6-8 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk, and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skill sets as a foundation.

In this role, you will:
- Ingest and provision raw datasets, enriched tables, and/or curated, reusable data assets to enable a variety of use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines that collect data from disparate sources across the enterprise and external sources and deliver it to our data platform.
- Build Extract, Transform, and Load (ETL) workflows, using both advanced data manipulation tools and programmatic data manipulation throughout our data flows, ensuring data is available at each stage in the data flow and in the form needed for each system, service, and customer along that flow (see the illustrative sketch after this listing).
- Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions.
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/Must-Have Skills:
- 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (ETL: ODI, SSIS; DB: PL/SQL) and AWS Redshift.
- At least 4 years of experience managing data extraction, transformation, and loading from various sources using Oracle Data Integrator, with exposure to other tools such as SSIS.
- At least 4 years of experience in database design and dimensional modeling using Oracle PL/SQL and Microsoft SQL Server.
- Experience developing ETL processes (ETL control tables, error logging, auditing, data quality, etc.), implementing reusability and parameterized workflow design.
- Advanced working SQL knowledge and experience with relational and NoSQL databases, as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4j).
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
- Expertise in data modeling and database design, with skills in performance tuning.
- Experience with OLAP and OLTP databases and data structuring/modeling, with an understanding of key data points.
- Experience building and optimizing data pipelines on Azure Databricks, AWS Glue, or Oracle Cloud.
- Create and support ETL pipelines and table schemas to accommodate new and existing data sources for the Lakehouse.
- Experience with data visualization (Power BI/Tableau) and SSRS.

Good to Have:
- Experience working in Financial Crime, Financial Risk, and Compliance technology transformation domains.
- Certification on any cloud tech stack, preferably Microsoft Azure.
- In-depth knowledge and hands-on experience with data engineering, data warehousing, and Delta Lake on-premises (Oracle RDBMS, Microsoft SQL Server) and in the cloud (Azure, AWS, or Oracle Cloud).
- Ability to script (Bash, Azure CLI), code (Python, C#), and query (SQL, PL/SQL, T-SQL), coupled with software version control systems (e.g., GitHub) and CI/CD systems.
- Design and development of systems for maintenance of the Azure/AWS Lakehouse, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
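
For illustration only, and not taken from the EY posting: a minimal PySpark sketch of an ETL step with a simple data-quality gate and error logging, in the spirit of the control-table and auditing practices listed above. The runtime (Databricks, Glue, or local Spark), paths, and column names are assumptions.

```python
# Illustrative sketch only: load raw transactions, reject rows failing basic
# data-quality rules, and write the clean set to a curated zone. Paths and
# column names are hypothetical.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl_step")

spark = SparkSession.builder.appName("example-etl-step").getOrCreate()


def load_transactions(source_path: str, target_path: str) -> None:
    df = spark.read.parquet(source_path)

    # Data-quality gate: rows missing a key or with negative amounts are rejected.
    bad = df.filter(F.col("txn_id").isNull() | (F.col("amount") < 0))
    good = df.filter(F.col("txn_id").isNotNull() & (F.col("amount") >= 0))

    bad_count = bad.count()
    if bad_count:
        log.warning("Rejected %d rows failing data-quality checks", bad_count)
        bad.write.mode("append").parquet(target_path + "_rejected")

    # Light transformation before loading to the curated zone.
    (good.withColumn("load_ts", F.current_timestamp())
         .write.mode("overwrite")
         .parquet(target_path))


if __name__ == "__main__":
    # Hypothetical source and target locations.
    load_transactions("s3://example/raw/transactions/", "s3://example/curated/transactions/")
```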

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant, AWS DataLake!

Responsibilities
- Knowledge of Data Lake on AWS services, with exposure to creating external tables and Spark programming.
- Ability to write effective and scalable Python code for automation, data wrangling, and ETL (see the illustrative sketch after this listing).
- Designing and implementing robust applications and working on automation using Python code.
- Debugging applications to ensure low latency and high availability.
- Writing optimized custom SQL queries.
- Experienced in team and client handling.
- Strong documentation skills related to systems, design, and delivery.
- Integrate user-facing elements into applications.
- Knowledge of external tables and Data Lake concepts.
- Able to allocate tasks, collaborate on status exchanges, and drive work to successful closure.
- Implement security and data protection solutions.
- Must be capable of writing SQL queries for validating dashboard outputs.
- Must be able to translate visual requirements into detailed technical specifications.
- Well versed in handling Excel, CSV, text, JSON, and other unstructured file formats using Python.
- Expertise in at least one popular Python framework (such as Django, Flask, or Pyramid).
- Good understanding of and exposure to Git, Bamboo, Confluence, and Jira.
- Good with DataFrames and ANSI SQL using pandas.
- Team player with a collaborative approach and excellent communication skills.

Qualifications we seek in you!
Minimum Qualifications
- BE/B Tech/MCA
- Excellent written and verbal communication skills
- Good knowledge of Python and PySpark

Preferred Qualifications/Skills
- Strong knowledge of any ETL tool is good to have.
- Knowledge of AWS cloud and Snowflake is good to have.
- Knowledge of PySpark is a plus.

Why join Genpact?
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a "starter kit," paying to apply, or purchasing equipment or training.
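
For illustration only, and not taken from the Genpact posting: a minimal pandas sketch of the data-wrangling work described above, combining a CSV extract with a JSON feed into a tidy DataFrame suitable for validating dashboard output. The file names, columns, and aggregation are hypothetical.

```python
# Illustrative sketch only: merge a CSV order extract with a JSON customer feed
# and aggregate monthly revenue by region. All names are hypothetical.
import json

import pandas as pd


def build_sales_summary(csv_path: str, json_path: str) -> pd.DataFrame:
    orders = pd.read_csv(csv_path, parse_dates=["order_date"])

    with open(json_path, encoding="utf-8") as fh:
        customers = pd.json_normalize(json.load(fh))

    merged = orders.merge(customers, on="customer_id", how="left")

    # Basic wrangling: drop incomplete rows, then aggregate like a dashboard tile would.
    merged = merged.dropna(subset=["customer_id", "amount"])
    summary = (
        merged.groupby(["region", merged["order_date"].dt.to_period("M")])["amount"]
        .sum()
        .reset_index(name="monthly_revenue")
    )
    return summary


if __name__ == "__main__":
    # Hypothetical input files for illustration only.
    print(build_sales_summary("orders.csv", "customers.json").head())
```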

Posted 3 weeks ago

Apply
