5.0 - 10.0 years
22 - 37 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Inviting applications for the role of Senior Principal Consultant - Data Engineer, AWS!
Locations: Bangalore, Hyderabad, Kolkata
Responsibilities
Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, together with appropriate cloud-based services such as Amazon AWS.
Build data pipelines by building ETL (Extract-Transform-Load) processes.
Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost, require minimal maintenance, and provide high availability with improved security.
Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
Coordinate with release management and other supporting teams to deploy changes to the production environment.
Qualifications we seek in you!
Minimum Qualifications
Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
Experience with Databricks is an added advantage.
Strong experience in Python and SQL.
Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
Advanced programming skills in Python for data processing and automation.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with Apache Kafka for real-time data streaming and event processing.
Proficiency in SQL for data querying and transformation.
Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.
Strong communication and collaboration skills to work effectively with cross-functional teams.
Preferred Qualifications/Skills
Master's degree in Computer Science, Electronics, or Electrical Engineering.
AWS Data Engineering & Cloud certifications, Databricks certifications.
Experience with multiple data integration technologies and cloud platforms.
Knowledge of Change & Incident Management processes.
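The posting above centres on building ETL pipelines with Python and Spark on AWS. Purely as an illustrative, non-authoritative sketch of that kind of work, the snippet below shows a minimal PySpark batch job that reads raw JSON from S3, cleans it, and writes partitioned Parquet back to S3; the bucket paths and column names are hypothetical placeholders, not details taken from the listing.

```python
# Minimal PySpark batch ETL sketch: read raw events from S3, clean them,
# and write a partitioned, analytics-ready copy back to S3.
# Bucket names, paths, and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-curated-etl").getOrCreate()

# Extract: raw JSON events landed by an upstream ingestion process
raw = spark.read.json("s3://example-raw-bucket/events/")

# Transform: drop malformed rows, normalise types, derive a partition column
curated = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: columnar, partitioned output suitable for Athena or Redshift Spectrum
(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))

spark.stop()
```

In a Glue or EMR deployment the same logic would typically be wrapped in the platform's job entry point rather than run as a standalone script.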
Posted 1 day ago
5.0 - 10.0 years
12 - 22 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Node.js experience, Express.js, AWS experience, microservices architecture, JavaScript debugging techniques, writing unit test cases for backend API components using Mocha or Jest, CI/CD pipelines, Jira, Confluence, Service, Agile methodology.
Posted 5 days ago
5.0 - 10.0 years
10 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
About the Role: We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals.
Key Responsibilities:
Deliver end-to-end training on the Data Engineering with AWS curriculum, including:
- Oracle SQL and ANSI SQL
- Data Warehousing Concepts, ETL & ELT
- Data Modeling and Data Vault
- Python programming for data engineering
- AWS Fundamentals (EC2, S3, Glue, Redshift, Athena, Kinesis, etc.)
- Apache Spark and Databricks
- Data Ingestion, Processing, and Migration Utilities
- Real-time Analytics and Compute Services (Airflow, Step Functions)
Facilitate engaging sessions, both virtual and in-person, and adapt instructional methods to suit diverse learning styles.
Guide learners through hands-on labs, coding exercises, and real-world projects.
Assess learner progress through evaluations, assignments, and practical assessments.
Provide mentorship, resolve doubts, and inspire confidence in learners.
Collaborate with the program management team to continuously improve course delivery and learner experience.
Maintain up-to-date knowledge of AWS and data engineering best practices.
Ideal Candidate Profile:
Experience: Minimum 5-8 years in Data Engineering, Big Data, or Cloud Data Solutions. Prior experience delivering technical training or conducting workshops is strongly preferred.
Technical Expertise: Proficiency in SQL, Python, and Spark. Hands-on experience with AWS services: Glue, Redshift, Athena, S3, EC2, Kinesis, and related tools. Familiarity with Databricks, Airflow, Step Functions, and modern data pipelines.
Certifications: AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus.
Soft Skills: Excellent communication, facilitation, and interpersonal skills. Ability to break down complex concepts into simple, relatable examples. Strong commitment to learner success and outcomes.
Email your application to: careers@edubridgeindia.in.
Posted 1 week ago
7.0 - 8.0 years
9 - 10 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python
MatchMove is looking for: Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and catalogued with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.
Engagement Model: Direct placement with client. This is a remote role.
Shift timings: 10 AM to 7 PM
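Several of the responsibilities above revolve around structuring data into open table formats such as Apache Iceberg to support time-travel queries. The sketch below is one hedged way to do that with PySpark's DataFrameWriterV2, assuming the Iceberg runtime jar is on the Spark classpath and using a simple Hadoop-style catalog; the catalog, table, and path names are made up for illustration (a production setup on AWS would more likely use a Glue-backed Iceberg catalog).

```python
# Sketch: persist a curated DataFrame as an Apache Iceberg table so that
# downstream services can run incremental and time-travel queries.
# Assumes the Iceberg Spark runtime jar is available; names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("iceberg-write-demo")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")  # simple path-based catalog for the sketch
    .config("spark.sql.catalog.lake.warehouse", "s3://example-lake/warehouse/")
    .getOrCreate()
)

# Staged data is assumed to already carry a txn_date column
txns = spark.read.parquet("s3://example-lake/staging/transactions/")

# DataFrameWriterV2: create or replace the Iceberg table, partitioned by txn_date
(txns.writeTo("lake.finance.transactions")
     .using("iceberg")
     .partitionedBy(F.col("txn_date"))
     .createOrReplace())

# Earlier snapshots of the table can later be read back via Iceberg's
# snapshot/time-travel options for audit or reconciliation queries.
```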
Posted 1 week ago
7.0 - 10.0 years
30 - 36 Lacs
Gandhinagar
Work from Office
Responsibilities: * Design and implement scalable cloud architectures on AWS using AWS CloudFormation, Python, Terraform, Kinesis, Lambda, Glue, S3, Redshift, DynamoDB, and OLAP & OLTP databases with SQL.
Benefits: Flexi working, work from home, health insurance, performance bonus, provident fund, mobile bill reimbursements
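The role above pairs Kinesis with Lambda for event processing. Purely as an illustrative sketch (the function name, stream contents, and the partial-batch response are assumptions, not details from the posting), this is the general shape of a Python Lambda handler decoding Kinesis records delivered in the standard event envelope:

```python
# Sketch of an AWS Lambda handler consuming a Kinesis stream event.
# Lambda delivers base64-encoded record payloads inside event["Records"];
# the handler decodes and parses them. All names are illustrative.
import base64
import json

def handler(event, context):
    decoded = []
    for record in event.get("Records", []):
        # Kinesis payloads arrive base64-encoded inside the event envelope
        payload = base64.b64decode(record["kinesis"]["data"])
        decoded.append(json.loads(payload))
    # In a real pipeline the decoded batch would be written to S3, Redshift, etc.
    print(f"processed {len(decoded)} records")
    # Only meaningful if ReportBatchItemFailures is enabled on the event source mapping
    return {"batchItemFailures": []}
```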
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Hiring for a FAANG company. Note: This position is part of a program designed to support women professionals returning to the workforce after a career break (9+ months career gap).
About the Role
Join a high-impact global business team that is building cutting-edge B2B technology solutions. As part of a structured returnship program, this role is ideal for experienced professionals re-entering the workforce after a career break. You'll work on mission-critical data infrastructure in one of the world's largest cloud-based environments, helping transform enterprise procurement through intelligent architecture and scalable analytics. This role merges consumer-grade experience with enterprise-grade features to serve businesses worldwide. You'll collaborate across engineering, sales, marketing, and product teams to deliver scalable solutions that drive measurable value.
Key Responsibilities:
Design, build, and manage scalable data infrastructure using modern cloud technologies
Develop and maintain robust ETL pipelines and data warehouse solutions
Partner with stakeholders to define data needs and translate them into actionable solutions
Curate and manage large-scale datasets from multiple platforms and systems
Ensure high standards for data quality, lineage, security, and governance
Enable data access for internal and external users through secure infrastructure
Drive insights and decision-making by supporting sales, marketing, and outreach teams with real-time and historical data
Work in a high-energy, fast-paced environment that values curiosity, autonomy, and impact
Who You Are:
5+ years of experience in data engineering or related technical roles
Proficient in SQL and familiar with relational database management
Skilled in building and optimizing ETL pipelines
Strong understanding of data modeling and warehousing
Comfortable working with large-scale data systems and distributed computing
Able to work independently, collaborate with cross-functional teams, and communicate clearly
Passionate about solving complex problems through data
Preferred Qualifications:
Hands-on experience with cloud technologies including Redshift, S3, AWS Glue, EMR, Lambda, Kinesis, and Firehose
Familiarity with non-relational databases (e.g., object storage, document stores, key-value stores, column-family DBs)
Understanding of cloud access control systems such as IAM roles and permissions
Returnship Benefits:
Dedicated onboarding and mentorship support
Flexible work arrangements
Opportunity to work on meaningful, global-scale projects while rebuilding your career momentum
Supportive team culture that encourages continuous learning and professional development
Top 10 Must-Have Skills:
SQL
ETL Development
Data Modeling
Cloud Data Warehousing (e.g., Redshift or equivalent)
Experience with AWS or similar cloud platforms
Working with Large-Scale Datasets
Data Governance & Security Awareness
Business Communication & Stakeholder Collaboration
Automation with Python/Scala (for ETL pipelines)
Familiarity with Non-Relational Databases
Posted 1 week ago
4.0 - 7.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled AWS Data Engineer with 4 to 7 years of experience in data engineering, preferably in the employment firm or recruitment services industry. The ideal candidate should have a strong background in computer science, information systems, or computer engineering.
Roles and Responsibility
Design and develop solutions based on technical specifications.
Translate functional and technical requirements into detailed designs.
Work with partners for regular updates, requirement understanding, and design discussions.
Lead a team, providing technical/functional support, conducting code reviews, and optimizing code/workflows.
Collaborate with cross-functional teams to achieve project goals.
Develop and maintain large-scale data pipelines using the AWS Cloud platform services stack.
Job
Strong knowledge of Python/PySpark programming languages.
Experience with AWS Cloud platform services such as S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc.
Basic SQL knowledge and exposure to data warehousing concepts like Data Warehouse, Data Lake, Dimensions, etc.
Excellent communication skills and ability to work in a fast-paced environment.
Ability to lead a team and provide technical/functional support.
Strong problem-solving skills and attention to detail.
A B.E./Master's degree in Computer Science, Information Systems, or Computer Engineering is required.
The company offers a dynamic and supportive work environment, with opportunities for professional growth and development. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
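Since the stack above includes Athena alongside S3 and EMR, a small hedged example of driving an ad-hoc Athena query from Python with boto3 may be useful; the database, table, and result-bucket names below are placeholders invented for the sketch, not details from the listing.

```python
# Sketch: run an ad-hoc Athena query from Python with boto3 and poll until it
# finishes. Database, table, bucket, and region values are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events "
                "FROM analytics_db.curated_events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state (simplified; production code
# would add a timeout and fetch results with get_query_results)
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"Athena query {query_id} finished with state {state}")
```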
Posted 1 week ago
6.0 - 11.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled Snowflake Ingress/Egress Specialist with 6 to 12 years of experience to manage and optimize data flow into and out of our Snowflake data platform. This role involves implementing secure, scalable, and high-performance data pipelines, ensuring seamless integration with upstream and downstream systems, and maintaining compliance with data governance policies.
Roles and Responsibility
Design, implement, and monitor data ingress and egress pipelines in and out of Snowflake.
Develop and maintain ETL/ELT processes using tools like Snowpipe, Streams, Tasks, and external stages (S3, Azure Blob, GCS).
Optimize data load and unload processes for performance, cost, and reliability.
Coordinate with data engineering and business teams to support data movement for analytics, reporting, and external integrations.
Ensure data security and compliance by managing encryption, masking, and access controls during data transfers.
Monitor data movement activities using Snowflake Resource Monitors and Query History.
Job
Bachelor's degree in Computer Science, Information Systems, or a related field.
6-12 years of experience in data engineering, cloud architecture, or Snowflake administration.
Hands-on experience with Snowflake features such as Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing.
Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer).
Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica.
Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages.
Familiarity with network security, encryption, and data compliance best practices.
Snowflake certification (SnowPro Core or Advanced) is preferred.
Experience with real-time streaming data (Kafka, Kinesis) is desirable.
Knowledge of DevOps tools (Terraform, CI/CD pipelines) is a plus.
Strong communication and documentation skills are essential.
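The ingress side of this role (external stages and bulk loads) can be illustrated with a minimal, hedged sketch using the snowflake-connector-python package; the account, warehouse, stage, and table names are placeholders, and real credentials would come from a secrets manager rather than literals.

```python
# Sketch: bulk-load staged S3 files into a Snowflake table with COPY INTO,
# using snowflake-connector-python. All connection values and object names
# are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",            # placeholder - use a vault/secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Ingress: copy Parquet files from an external S3 stage into a raw table
    cur.execute("""
        COPY INTO RAW.EVENTS
        FROM @RAW.S3_EVENTS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())      # per-file load results
finally:
    conn.close()
```

Continuous ingestion of the same stage would typically be handed to a Snowpipe definition instead of an ad-hoc COPY.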
Posted 1 week ago
12.0 - 20.0 years
12 - 16 Lacs
Noida
Work from Office
We are looking for a skilled Amazon Connect Center Architect with 12 to 20 years of experience. The ideal candidate will have expertise in designing and implementing scalable and reliable cloud-based contact center solutions using Amazon Connect and AWS ecosystem services.
Roles and Responsibility
Lead the end-to-end implementation and configuration of Amazon Connect.
Design and deploy contact center setups using CI/CD pipelines or Terraform.
Develop and implement scalable and reliable cloud-based contact center solutions.
Integrate Amazon Connect with Salesforce (Service Cloud Voice) and other CRM systems.
Collaborate with cross-functional teams to ensure seamless integration and deployment.
Troubleshoot and resolve technical issues related to Amazon Connect and AWS services.
Job
Proficiency in programming languages such as Java, Python, and JavaScript.
Experience with Amazon Web Services, including Lambda, S3, DynamoDB, Kinesis, and CloudWatch.
Familiarity with telephony concepts, including SIP, DID, ACD, IVR, and CTI.
Strong understanding of CRM integrations, especially Salesforce Service Cloud Voice.
Experience with REST APIs and integration frameworks.
Hands-on experience with AWS services, including Amazon Lex, Amazon Connect, and IAM.
AWS Certified Solutions Architect or Amazon Connect certification is preferred.
Experience with Amazon Connect (contact flows, queues, routing profiles).
Integration experience with Salesforce (Service Cloud Voice).
Posted 1 week ago
4.0 - 7.0 years
7 - 12 Lacs
Indore, Pune, Chennai
Work from Office
What will your role look like
Develop microservices using Spring/AWS technologies and deploy them on the AWS platform.
Support Java Angular enterprise applications with a multi-region setup.
Perform unit and system testing of application code as well as execution of implementation activities.
Design, build and test Java EE and Angular full stack applications.
Why you will love this role
Besides a competitive package, an open workspace full of smart and pragmatic team members, with ever-growing opportunities for professional and personal growth.
Be a part of a learning culture where teamwork and collaboration are encouraged, diversity is valued, and excellence, compassion, openness and ownership are rewarded.
We would like you to bring along
In-depth knowledge of popular Java frameworks like Spring Boot and Spring.
Experience with Object-Oriented Design (OOD).
Experience in Spring and Spring Boot, and excellent knowledge of relational databases, MySQL, and ORM technologies (JPA2, Hibernate).
Experience working in Agile (Scrum/Lean) with a DevSecOps focus.
Experience with AWS, Kubernetes, Docker containers.
AWS component usage, configuration and deployment: Elasticsearch, EC2, S3, SNS, SQS, API Gateway Service, Kinesis.
AWS certification would be advantageous.
Knowledge of Health and related technologies.
Posted 1 week ago
7.0 - 12.0 years
14 - 18 Lacs
Noida
Work from Office
Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.
What you need
* BS in an Engineering or Science discipline, or equivalent experience
* 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role
* Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
* Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments
* Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW
* Experience of working in larger initiatives building and rationalizing large-scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus
* Willingness to experiment and learn new approaches and technology applications
* Knowledge and experience with various relational databases and demonstrable proficiency in SQL and supporting analytics uses and users
* Knowledge of software engineering and agile development best practices
* Excellent written and verbal communication skills
The Brightly culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
Posted 1 week ago
8.0 - 13.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Hello Talented Techie!
We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make optimal use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective processes.
We are looking for a Sr. AWS Cloud Architect
Architect and Design: Develop scalable and efficient data solutions using AWS services such as AWS Glue, Amazon Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, AWS Glue (Streaming ETL) and EMR.
Integration: Integrate real-time data from various Siemens organizations into our data lake, ensuring seamless data flow and processing.
Data Lake Management: Design and manage a large-scale data lake using AWS services like S3, Glue, and Lake Formation.
Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
Snowflake Integration: Implement and manage data pipelines to load data into Snowflake, utilizing Iceberg tables for optimal performance and flexibility.
Performance Optimization: Optimize data processing pipelines for performance, scalability, and cost-efficiency.
Security and Compliance: Ensure that all solutions adhere to security best practices and compliance requirements.
Collaboration: Work closely with cross-functional teams, including data engineers, data scientists, and application developers, to deliver end-to-end solutions.
Monitoring and Troubleshooting: Implement monitoring solutions to ensure the reliability and performance of data pipelines. Troubleshoot and resolve any issues that arise.
You'd describe yourself as:
Experience: 8+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka and Lake Formation. Experience with real-time data processing and streaming architectures.
Big Data Querying Tools: Strong knowledge of big data querying tools (e.g., Hive, PySpark).
Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Certifications: AWS certifications are a plus.
Create a better #TomorrowWithUs!
This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. Find out more about Siemens careers at
Posted 1 week ago
4.0 - 9.0 years
4 - 8 Lacs
Gurugram
Work from Office
Data Engineer
Location: PAN India
Work mode: Hybrid
Work timing: 2 PM to 11 PM
Primary Skill: Data Engineer
Experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
4+ years of experience working with both relational and non-relational/NoSQL databases is required.
Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Also, experience in Redshift is required along with other SQL DB experience.
Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture, and an understanding of building an end-to-end data pipeline.
Strong understanding of Kinesis, Kafka, CDK. Experience with Kafka and ECS is also required.
A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling is required.
Experience in Node.js and CDK.
JD Responsibilities
Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark.
Design and implement a customizable data processing framework using Python/PySpark. This framework should be capable of handling diverse scenarios and evolving data processing requirements.
Implement data pipelines for data ingestion, transformation and extraction leveraging AWS Cloud services.
Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL) and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.
Qualifications
Must have:
Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
4+ years of experience working with both relational and non-relational/NoSQL databases is required.
Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Strong working experience in Redshift is required along with other SQL DB experience.
Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
Complete understanding of building an end-to-end data pipeline.
Nice to have:
Strong understanding of Kinesis, Kafka, CDK.
A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
Experience in Node.js and CDK.
Experience with Kafka and ECS.
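One hedged way to picture the "metadata-driven ingestion framework" idea described above: a small config lists the sources, and a thin driver (for example a Lambda or Step Functions task) starts one parameterised AWS Glue job run per entry via boto3. The job name, config fields, and argument keys below are invented purely for illustration.

```python
# Sketch of a config-driven ingestion trigger: each metadata entry becomes one
# run of a reusable, parameterised Glue job. All names are placeholders.
import boto3

glue = boto3.client("glue")

INGESTION_CONFIG = [
    {"source": "orders",    "format": "csv",     "target_prefix": "s3://example-lake/raw/orders/"},
    {"source": "customers", "format": "parquet", "target_prefix": "s3://example-lake/raw/customers/"},
]

def trigger_ingestion(config=INGESTION_CONFIG):
    run_ids = []
    for entry in config:
        resp = glue.start_job_run(
            JobName="generic-ingestion-job",          # one reusable Glue job
            Arguments={                               # read inside the job via getResolvedOptions
                "--source_name": entry["source"],
                "--source_format": entry["format"],
                "--target_prefix": entry["target_prefix"],
            },
        )
        run_ids.append(resp["JobRunId"])
    return run_ids
```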
Posted 1 week ago
8.0 - 13.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Experience:
8 years of experience in data engineering, specifically in cloud environments like AWS.
Proficiency in PySpark for distributed data processing and transformation.
Solid experience with AWS Glue for ETL jobs and managing data workflows.
Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
Technical Skills:
Proficiency in Python and PySpark for data processing and transformation tasks.
Deep understanding of ETL concepts and best practices.
Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
Experience with Data Warehousing and Big Data technologies, specifically within AWS.
Additional Skills:
Experience with AWS Lambda for serverless data processing and orchestration.
Understanding of AWS Redshift for data warehousing and analytics.
Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
Knowledge of data governance practices, including data lineage and auditing.
Familiarity with CI/CD pipelines and Git for version control.
Experience with Docker and containerization for building and deploying applications.
Responsibilities:
Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
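As a rough illustration of the PySpark-on-Glue work listed above, the skeleton below follows the standard Glue job structure: argument resolution, GlueContext setup, a Data Catalog read, a Spark transform, and an S3 write. It only runs inside the Glue job environment (where the awsglue libraries are provided), and the database, table, and path names are assumptions made for the sketch.

```python
# Skeleton of a PySpark-based AWS Glue job. Names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# --target_path is a custom argument supplied when the job run is started
args = getResolvedOptions(sys.argv, ["JOB_NAME", "target_path"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table as a DynamicFrame, then work in plain Spark
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)
df = dyf.toDF().filter(F.col("order_status") == "COMPLETED")

df.write.mode("append").parquet(args["target_path"])

job.commit()
```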
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Urgent opening for AWS Developer - Bangalore
Posted On: 01st Feb 2020 12:16 PM
Location: Bangalore
Role / Position: AWS Developer
Experience (required): 2-5 yrs
Description: Our client is a leading Big Data analytics company headquartered in Bangalore.
Designation: AWS Developer
Location: Bangalore
Experience: 2-5 yrs; AWS certified with solid 1+ years of experience in a production environment.
Candidate Profile: An AWS certified candidate with 1+ year of experience working in a production environment is the specific ask. Within AWS, the key needs are working knowledge of Kinesis (streaming data), Redshift/RDS (querying), and DynamoDB (NoSQL DB).
Send resumes to girish.expertiz@gmail.com
Posted 1 week ago
6.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.
As an AWS Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.
In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.
Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful.
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
• 10+ years of experience in data engineering with a minimum of 6 years on AWS.
• Proficiency in AWS data services, including S3, Redshift, DynamoDB, Glue, Lambda, and EMR.
• Strong SQL skills and experience with NoSQL databases on AWS.
• Programming skills in Python, Java, or Scala for data processing and ETL tasks.
• Solid understanding of data warehousing concepts, data modeling, and ETL best practices.
• Experience with machine learning model deployment on AWS SageMaker.
• Familiarity with data orchestration tools, such as Apache Airflow, AWS Step Functions, or AWS Data Pipeline.
• Excellent problem-solving and analytical skills with attention to detail.
• Strong communication skills and ability to collaborate effectively with both technical and non-technical stakeholders.
• Experience with advanced AWS analytics services such as Athena, Kinesis, QuickSight, and Elasticsearch.
• Hands-on experience with Amazon Bedrock and generative AI tools for exploring and implementing AI-based solutions.
• AWS Certifications, such as AWS Certified Big Data – Specialty, AWS Certified Machine Learning – Specialty, or AWS Certified Solutions Architect.
• Familiarity with CI/CD pipelines, containerization (Docker), and serverless computing concepts on AWS.
Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization.
• Experience in Data Modelling, to create a conceptual model of how data is connected and how it will be used in business processes.
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
• Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate.
• Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio.
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 1 week ago
5.0 - 7.0 years
1 - 1 Lacs
Lucknow
Hybrid
Technical Experience
5-7 years of hands-on experience in data pipeline development and ETL processes.
3+ years of deep AWS experience, specifically with Kinesis, Glue, Lambda, S3, and Step Functions.
Strong proficiency in NodeJS/JavaScript and Java for serverless and containerized applications.
Production experience with Apache Spark, Apache Flink, or similar big data processing frameworks.
Data Engineering Expertise
Proven experience with real-time streaming architectures and event-driven systems.
Hands-on experience with Parquet, Avro, Delta Lake, and columnar storage optimization.
Experience implementing data quality frameworks such as Great Expectations or similar tools.
Knowledge of star schema modeling, slowly changing dimensions, and data warehouse design patterns.
Experience with medallion architecture or similar progressive data refinement strategies.
AWS Skills
Experience with Amazon EMR, Amazon MSK (Kafka), or Amazon Kinesis Analytics.
Knowledge of Apache Airflow for workflow orchestration.
Experience with DynamoDB, ElastiCache, and Neptune for specialized data storage.
Familiarity with machine learning pipelines and Amazon SageMaker integration.
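The listing mentions data quality frameworks (such as Great Expectations) and medallion-style progressive refinement. As a hedged, framework-free sketch of the same idea, the snippet below gates promotion of a hypothetical bronze dataset to the silver layer on a couple of simple expectations; the thresholds, column names, and paths are purely illustrative.

```python
# Sketch of a lightweight data-quality gate of the kind a framework such as
# Great Expectations formalises: validate a bronze-layer DataFrame before
# promoting it to silver. All names and thresholds are placeholders.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

def passes_quality_gate(df: DataFrame) -> bool:
    total = df.count()
    if total == 0:
        return False
    null_ids = df.filter(F.col("order_id").isNull()).count()
    dupes = total - df.dropDuplicates(["order_id"]).count()
    # expectations: no null keys, under 1% duplicate keys
    return null_ids == 0 and (dupes / total) < 0.01

bronze = spark.read.parquet("s3://example-lake/bronze/orders/")
if passes_quality_gate(bronze):
    bronze.write.mode("overwrite").parquet("s3://example-lake/silver/orders/")
else:
    raise ValueError("orders batch failed data-quality checks; promotion halted")
```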
Posted 1 week ago
6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Skills Required
Experience in designing and building a serverless data lake solution using a layered components architecture including ingestion, storage, processing, security & governance, data cataloguing & search, and consumption layers.
Hands-on experience in AWS serverless technologies such as Lake Formation, Glue, Glue Python, Glue Workflows, Step Functions, S3, Redshift, QuickSight, Athena, AWS Lambda, and Kinesis. Must have experience in Glue.
Experience in designing, building, orchestrating and deploying multi-step data processing pipelines using Python and Java.
Experience in managing source data access security, configuring authentication and authorisation, and enforcing data policies and standards.
Experience in AWS environment setup and configuration.
Minimum 6 years of relevant experience with at least 3 years in building solutions using AWS.
Ability to work under pressure and commitment to meet customer expectations.
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
The Customer, Sales & Service Practice | Cloud
Job Title: Amazon Connect + Level 9 (Consultant) + Entity (S&C GN)
Management Level: Level 9 - Consultant
Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai
Must have skills: AWS contact center, Amazon Connect flows, AWS Lambda and Lex bots, Amazon Connect Contact Center
Good to have skills: AWS Lambda and Lex bots, Amazon Connect
Join our team of Customer Sales & Service consultants who solve customer facing challenges at clients spanning sales, service and marketing to accelerate business change.
Practice: Customer Sales & Service Sales | Areas of Work: Cloud AWS Cloud Contact Center Transformation, Analysis and Implementation | Level: Consultant | Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai | Years of Exp: 5-8 years
Explore an Exciting Career at Accenture
Are you passionate about scaling businesses using in-depth frameworks and techniques to solve customer facing challenges? Do you want to design, build and implement strategies to enhance business performance? Does working in an inclusive and collaborative environment spark your interest? Then, this is the right place for you! Welcome to a host of exciting global opportunities within Accenture Strategy & Consulting's Customer, Sales & Service practice.
The Practice - A Brief Sketch
The Customer Sales & Service Consulting practice is aligned to the Capability Network Practice of Accenture and works with clients across their marketing, sales and services functions. As part of the team, you will work on transformation services driven by key offerings like Living Marketing, Connected Commerce and Next-Generation Customer Care. These services help our clients become living businesses by optimizing their marketing, sales and customer service strategy, thereby driving cost reduction, revenue enhancement, customer satisfaction and impacting front end business metrics in a positive manner. You will work closely with our clients as consulting professionals who design, build and implement initiatives that can help enhance business performance. As part of these, you will drive the following:
Work on creating business cases for journey to cloud, cloud strategy, and cloud contact center vendor assessment activities
Work on creating a cloud transformation approach for contact center transformations
Work along with Solution Architects for architecting cloud contact center technology with the AWS platform
Work on enabling cloud contact center technology platforms for global clients, specifically on Amazon Connect
Work on innovative assets, proof of concepts, and sales demos for AWS cloud contact center
Support AWS offering leads in responding to RFIs and RFPs
Bring your best skills forward to excel at the role:
Good understanding of the contact center technology landscape.
An understanding of the AWS Cloud platform and services with solution architect skills.
Deep expertise in AWS contact-center-relevant services.
Sound experience in developing Amazon Connect flows, AWS Lambda and Lex bots.
Deep functional and technical understanding of APIs and related integration experience.
Functional and technical understanding of building API-based integrations with Salesforce, ServiceNow and bot platforms.
Ability to understand customer challenges and requirements, and to address these challenges/requirements in a differentiated manner.
Ability to help the team implement the solution, and to sell and deliver cloud contact center solutions to clients.
Excellent communication skills.
Ability to develop requirements based on leadership input.
Ability to work effectively in a remote, virtual, global environment.
Ability to take on new challenges and to be a passionate learner.
Read about us. Blogs
Your experience counts!
Bachelor's degree in a related field or equivalent experience; Post-Graduation in Business Management would be an added value.
Minimum 4-5 years of experience in delivering software-as-a-service or platform-as-a-service projects related to cloud CC service providers such as the Amazon Connect Contact Center cloud solution.
Hands-on experience working on the design, development and deployment of contact center solutions at scale.
Hands-on development experience with cognitive services such as Amazon Connect, Amazon Lex, Lambda, Kinesis, Athena, Pinpoint, Comprehend, Transcribe.
Working knowledge of one of the programming/scripting languages such as Node.js, Python, Java.
What's in it for you?
An opportunity to work on transformative projects with key G2000 clients.
Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies.
Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional.
Personalized training modules to develop your strategy & consulting acumen to grow your skills, industry knowledge and capabilities.
Opportunity to thrive in a culture that is committed to accelerating equality for all.
Engage in boundaryless collaboration across the entire organization.
About Accenture:
Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions, underpinned by the world's largest delivery network, Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 569,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at
About Accenture Strategy & Consulting:
Accenture Strategy shapes our clients' future, combining deep business insight with the understanding of how technology will impact industry and business models. Our focus on issues such as digital disruption, redefining competitiveness, operating and business models as well as the workforce of the future helps our clients find future value and growth in a digital world. Today, digital is changing the way organizations engage with their employees, business partners, customers and communities. This is our unique differentiator. Capability Network is a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our Capability Network teams complement our in-country teams to deliver cutting-edge expertise and measurable value to clients all around the world. For more information visit https://
Accenture Capability Network | Accenture in One Word: come and be a part of our team.
Qualification
Experience: Minimum 5 year(s) of experience is required
Educational Qualification: Engineering Degree or MBA from a Tier 1 or Tier 2 institute
Posted 2 weeks ago
3.0 - 7.0 years
12 - 18 Lacs
Chennai
Work from Office
Responsibilities: * Design, develop & maintain data pipelines using Python, SQL & Kinesis * Optimize performance through Apache Spark & Flink * Collaborate with cross-functional teams on CI/CD processes
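For the Kinesis piece of the pipeline work described above, here is a small hedged sketch of a Python producer using boto3; the stream name, region, and event fields are placeholders rather than anything specified in the posting.

```python
# Sketch: publish events to an Amazon Kinesis data stream with boto3,
# the kind of producer that typically feeds Spark/Flink consumers downstream.
# Stream name, region, and event fields are illustrative placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

def publish_event(event: dict, stream_name: str = "example-events-stream") -> str:
    resp = kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "unknown")),  # controls shard routing
    )
    return resp["SequenceNumber"]

if __name__ == "__main__":
    seq = publish_event({"user_id": 42, "action": "page_view", "ts": "2024-01-01T00:00:00Z"})
    print(f"published record with sequence number {seq}")
```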
Posted 2 weeks ago
7.0 - 12.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning.
Position Overview:
As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability. Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and Customer Data Platforms (CDP) will be a strong plus.
Key Responsibilities:
Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use.
Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making.
Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
Drive data governance, security, and compliance best practices.
Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
Lead the design, implementation, and lifecycle management of data services and solutions.
Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.
About You:
7+ years of experience in data engineering, software development, or distributed systems.
Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
Strong programming skills in Scala and/or Java (Python is a plus).
Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake, etc.).
Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
Understanding of data governance, security, and compliance in large-scale data environments.
Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus.
Strong problem-solving skills and ability to work in complex, unstructured environments.
Excellent communication and collaboration skills, with experience working in cross-functional teams.
Why Join Us?
Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
Influence and shape the future of data architecture and real-time data services at Target.
Solve high-impact business problems using scalable, low-latency data solutions.
Be part of a culture that values innovation, learning, and growth.
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
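The streaming side of this role (Kafka/Flink/Kinesis) can be sketched, in a hedged and illustrative way, with a minimal Kafka consumer loop using the confluent-kafka Python client; the broker address, topic, group id, and the process() stub are all assumptions made for the example, not details from the posting.

```python
# Sketch of a streaming consumer: read events from a Kafka topic with the
# confluent-kafka client and hand them to a processing function.
# Broker, topic, and group id are placeholders; error handling is minimal.
import json
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "broker.example.internal:9092",
    "group.id": "guest-events-processor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["guest-events"])

def process(event: dict) -> None:
    # placeholder for enrichment, feature computation, or cache updates
    print(event.get("event_type"))

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            # end-of-partition notifications are benign; anything else is raised
            if msg.error().code() != KafkaError._PARTITION_EOF:
                raise RuntimeError(msg.error())
            continue
        process(json.loads(msg.value()))
finally:
    consumer.close()
```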
Posted 2 weeks ago
7.0 - 9.0 years
15 - 22 Lacs
Chennai
Work from Office
Key Responsibilities
• Application Development: Design, develop, and maintain dealership applications using Java, Spring Boot, and Microservices architecture.
• Cloud Deployment: Build and manage cloud-native applications on AWS, leveraging services such as Lambda, ECS, RDS, S3, DynamoDB, and API Gateway.
• System Architecture: Develop and maintain scalable, high-availability architectures to support large-scale dealership operations.
• Database Management: Work with relational and NoSQL databases like PostgreSQL and DynamoDB for efficient data storage and retrieval.
• Integration & APIs: Develop and manage RESTful APIs and integrate with third-party services, including OEMs and dealership management systems (DMS).
• DevOps & CI/CD: Implement CI/CD pipelines using tools like Jenkins, GitHub Actions, or AWS CodePipeline for automated testing and deployments.
• Security & Compliance: Ensure application security, data privacy, and compliance with industry regulations.
• Collaboration & Mentorship: Work closely with product managers, designers, and other engineers. Mentor junior developers to maintain best coding practices.
Technical Skills Required
• Programming Languages: Java (8+), Spring Boot, Hibernate
• Cloud Platforms: AWS (Lambda, S3, EC2, ECS, RDS, DynamoDB, CloudFormation)
• Microservices & API Development: RESTful APIs, API Gateway
• Database Management: PostgreSQL, DynamoDB
• DevOps & Automation: Docker, Kubernetes, Terraform, CI/CD pipelines
• Testing & Monitoring: JUnit, Mockito, Prometheus, CloudWatch
• Version Control & Collaboration: Git, GitHub, Jira, Confluence
Nice-to-Have Skills
• Experience with serverless architectures (AWS Lambda, Step Functions)
• Exposure to event-driven architectures (Kafka, SNS, SQS)
Qualifications
• Bachelor's or master's degree in Computer Science, Software Engineering, or a related field
• 5+ years of hands-on experience in Java-based backend development
• Strong problem-solving skills with a focus on scalability and performance.
Posted 2 weeks ago
3.0 - 5.0 years
27 - 32 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (DE) / SDE – Data
Location: Bangalore
Experience range: 3-15
What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by embracing a technology-first approach in everything we do, with the aim of enhancing customer experience through superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to move the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives engineers the opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team needs are software development (preferably Python) for platform building on AWS, data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development, and advanced SQL and data modelling for analytics.
The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, be an early member in Kotak's digital transformation journey, learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch, and analytics) in a programmatic way, and help build systems that can eventually be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform: optimized storage for the entire bank and a centralized data lake; a managed compute and orchestration framework, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost optimization solutions such as EMR optimizers; automation; and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.
Data Governance
This team will be the central data governance team for Kotak Bank, managing the metadata platform, data privacy, data security, data stewardship, and the data quality platform.
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you. Your day-to-day role will include:
• Drive business decisions with technical input and lead the team.
• Design, implement, and support a data infrastructure from scratch.
• Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
• Extract, transform, and load data from various sources using SQL and AWS big data technologies (a minimal PySpark sketch follows this listing).
• Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
• Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
• Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
• Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data
• Bachelor's degree in Computer Science, Engineering, or a related field
• 3-5 years of experience in data engineering
• Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
• Experience with data pipeline tools such as Airflow and Spark
• Experience with data modeling and data quality best practices
• Excellent problem-solving and analytical skills
• Strong communication and teamwork skills
• Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
• Strong advanced SQL skills
PREFERRED QUALIFICATIONS
• AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
• Prior experience in the Indian banking segment and/or fintech is desired
• Experience with non-relational databases and data stores
• Building and operating highly available, distributed data processing systems for large datasets
• Professional software engineering and best practices for the full software development life cycle
• Designing, developing, and implementing different types of data warehousing layers
• Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
• Building scalable data infrastructure and understanding distributed systems concepts
• SQL, ETL, and data modelling
• Ensuring the accuracy and availability of data to customers
• Proficient in at least one scripting or programming language for handling large-volume data processing
• Strong presentation and communication skills
For Managers
• Customer centricity and obsession for the customer
• Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
• Ability to structure and organize teams, and streamline communication
• Prior work experience executing large-scale Data Engineering projects
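The role above centres on extracting, transforming, and loading data from many sources into an AWS data lake. As an illustration only (not part of the posting), here is a minimal PySpark sketch of one such batch job; the bucket paths, column names, and aggregation are hypothetical and would differ per source system.

```python
# Minimal PySpark batch ETL sketch (hypothetical paths and columns).
# Reads raw transaction CSVs from S3, derives a date column, aggregates
# per branch and day, and writes partitioned Parquet back to S3 for
# downstream warehouse loads. Runs on EMR or as a Glue Spark job.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_txn_curation").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-zone/transactions/")      # hypothetical bucket
)

curated = (
    raw
    .withColumn("txn_date", F.to_date("txn_ts"))      # hypothetical column names
    .groupBy("branch_id", "txn_date")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://example-curated-zone/branch_daily_txn/")  # hypothetical bucket
)

spark.stop()
```

From a curated zone like this, the data could be exposed through Redshift Spectrum or loaded with a COPY command, depending on the concurrency requirements the listing describes.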
Posted 2 weeks ago
0.0 - 2.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Data Engineer - 1 (Experience: 0-2 years)
What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by embracing a technology-first approach in everything we do, with the aim of enhancing customer experience through superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to move the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives engineers the opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team needs are software development (preferably Python) for platform building on AWS, data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development, and advanced SQL and data modelling for analytics.
The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, be an early member in Kotak's digital transformation journey, learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch, and analytics) in a programmatic way, and help build systems that can eventually be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform: optimized storage for the entire bank and a centralized data lake; a managed compute and orchestration framework, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost optimization solutions such as EMR optimizers; automation; and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.
Data Governance
This team will be the central data governance team for Kotak Bank, managing the metadata platform, data privacy, data security, data stewardship, and the data quality platform.
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you. Your day-to-day role will include:
• Drive business decisions with technical input and lead the team.
• Design, implement, and support a data infrastructure from scratch.
• Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA (a minimal Airflow DAG sketch for MWAA follows this listing).
• Extract, transform, and load data from various sources using SQL and AWS big data technologies.
• Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
• Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
• Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
• Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data
• Bachelor's degree in Computer Science, Engineering, or a related field
• Experience in data engineering
• Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
• Experience with data pipeline tools such as Airflow and Spark
• Experience with data modeling and data quality best practices
• Excellent problem-solving and analytical skills
• Strong communication and teamwork skills
• Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
• Strong advanced SQL skills
PREFERRED QUALIFICATIONS
• AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
• Prior experience in the Indian banking segment and/or fintech is desired
• Experience with non-relational databases and data stores
• Building and operating highly available, distributed data processing systems for large datasets
• Professional software engineering and best practices for the full software development life cycle
• Designing, developing, and implementing different types of data warehousing layers
• Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
• Building scalable data infrastructure and understanding distributed systems concepts
• SQL, ETL, and data modelling
• Ensuring the accuracy and availability of data to customers
• Proficient in at least one scripting or programming language for handling large-volume data processing
• Strong presentation and communication skills
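This listing calls out MWAA (managed Airflow) alongside Glue, EMR, and Redshift. Purely as an illustration (not part of the posting), a minimal Airflow DAG that triggers a Glue job via boto3 might look like the sketch below; the DAG id, schedule, region, and Glue job name are all hypothetical.

```python
# Minimal Airflow DAG sketch for MWAA (hypothetical names throughout).
# A single task triggers an existing AWS Glue job once a day; a real
# pipeline would add sensors, data-quality checks, and alerting.
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def trigger_glue_job(**context):
    """Start the (hypothetical) Glue job and log its run id."""
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(JobName="daily_txn_curation")
    print("Started Glue job run:", run["JobRunId"])

with DAG(
    dag_id="daily_txn_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    run_curation = PythonOperator(
        task_id="run_glue_curation",
        python_callable=trigger_glue_job,
    )
```

On MWAA the boto3 call would normally pick up the environment's execution role, so no credentials need to be embedded in the DAG.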
Posted 2 weeks ago
3.0 - 5.0 years
3 - 5 Lacs
Bengaluru
Work from Office
What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by embracing a technology-first approach in everything we do, with the aim of enhancing customer experience through superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.
About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to move the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives engineers the opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team needs are software development (preferably Python) for platform building on AWS, data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development, and advanced SQL and data modelling for analytics.
The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, be an early member in Kotak's digital transformation journey, learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch, and analytics) in a programmatic way, and help build systems that can eventually be operated by machines using AI technologies.
The data platform org is divided into 3 key verticals:
Data Platform
This vertical is responsible for building the data platform: optimized storage for the entire bank and a centralized data lake; a managed compute and orchestration framework, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost optimization solutions such as EMR optimizers; automation; and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.
Data Governance
This team will be the central data governance team for Kotak Bank, managing the metadata platform, data privacy, data security, data stewardship, and the data quality platform.
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, this is the team for you. Your day-to-day role will include:
• Drive business decisions with technical input and lead the team.
• Design, implement, and support a data infrastructure from scratch.
• Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
• Extract, transform, and load data from various sources using SQL and AWS big data technologies.
• Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
• Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
• Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
• Build data platforms, data pipelines, or data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer / SDE in Data
• Bachelor's degree in Computer Science, Engineering, or a related field
• 3-5 years of experience in data engineering
• Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
• Experience with data pipeline tools such as Airflow and Spark
• Experience with data modeling and data quality best practices (a minimal data-quality check sketch follows this listing)
• Excellent problem-solving and analytical skills
• Strong communication and teamwork skills
• Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
• Strong advanced SQL skills
BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager
• 10+ years of engineering experience, most of it in the data domain
• 5+ years of engineering team management experience
• 10+ years of experience planning, designing, developing, and delivering consumer software
• Experience partnering with product and program management teams
• 5+ years of experience managing data engineers, business intelligence engineers, and/or data scientists
• Experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
• Experience managing multiple concurrent programs, projects, and development teams in an Agile environment
• Strong understanding of Data Platform, Data Engineering, and Data Governance
• Experience designing and developing large-scale, high-traffic applications
PREFERRED QUALIFICATIONS
• AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
• Prior experience in the Indian banking segment and/or fintech is desired
• Experience with non-relational databases and data stores
• Building and operating highly available, distributed data processing systems for large datasets
• Professional software engineering and best practices for the full software development life cycle
• Designing, developing, and implementing different types of data warehousing layers
• Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
• Building scalable data infrastructure and understanding distributed systems concepts
• SQL, ETL, and data modelling
• Ensuring the accuracy and availability of data to customers
• Proficient in at least one scripting or programming language for handling large-volume data processing
• Strong presentation and communication skills
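Both the engineer and manager tracks above emphasize data quality. As an illustration only (not drawn from the posting), the sketch below shows one simple way to gate a PySpark pipeline on basic quality checks; the table path, key columns, and thresholds are hypothetical.

```python
# Minimal data-quality gate sketch in PySpark (hypothetical path and columns).
# Fails fast if the curated dataset is empty, has null keys, or contains
# duplicate primary keys, so bad data never reaches downstream consumers.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_gate").getOrCreate()

df = spark.read.parquet("s3://example-curated-zone/branch_daily_txn/")  # hypothetical

total_rows = df.count()
null_keys = df.filter(F.col("branch_id").isNull()).count()
duplicate_keys = (
    df.groupBy("branch_id", "txn_date").count().filter(F.col("count") > 1).count()
)

problems = []
if total_rows == 0:
    problems.append("dataset is empty")
if null_keys > 0:
    problems.append(f"{null_keys} rows with null branch_id")
if duplicate_keys > 0:
    problems.append(f"{duplicate_keys} duplicate (branch_id, txn_date) keys")

if problems:
    # Raising here lets the orchestrator (e.g. Airflow) fail the task, retry, or alert.
    raise ValueError("Data quality gate failed: " + "; ".join(problems))

print(f"Data quality gate passed for {total_rows} rows")
spark.stop()
```

In a mature data quality platform a dedicated framework such as Deequ or Great Expectations would replace these hand-rolled checks, but the gate-before-publish pattern stays the same.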
Posted 2 weeks ago