5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Seikor is hiring for Tricon Infotech Pvt. Ltd. ( https://www.triconinfotech.com/ ). We are seeking full-stack Python developers. We are offering INR 1500 if you clear the round 1 interview and are selected for the round 2 interview. Apply, earn during the process, and find your next awesome job at Tricon, powered by Seikor.

Job Title: Python Full-stack Developer
Location: Bengaluru, India
Experience: 4 - 10 Years
Team Size: 500-1,000 employees globally
Function: Software Development

Job Summary: We are looking for a skilled and experienced Python Full Stack Developer with hands-on AWS experience. The ideal candidate should have a strong foundation in backend development using Python and frameworks like Django or Flask. This role offers an exciting opportunity to work on dynamic, scalable applications in a collaborative and fast-paced environment.

Key Responsibilities:
- Lead and mentor a team of engineers, especially data engineers
- Architect scalable, secure backend systems using Python, FastAPI, and AWS
- Drive data infrastructure decisions with PostgreSQL, Redshift, and advanced data pipelines
- Collaborate cross-functionally to integrate AI-first features and stay ahead of emerging AI trends
- Ensure delivery of high-quality, maintainable code and manage technical debt

Required Skills & Qualifications:
- Strong leadership and communication skills
- Deep understanding of AWS services (EC2, Lambda, S3, IAM, Redshift)
- Advanced proficiency in Python and FastAPI
- Expertise in relational databases (PostgreSQL) and data warehousing (Redshift)
- Proven experience in ETL pipelines, data modeling, and optimization
- Ability to thrive in fast-paced, iterative environments

Nice to Have:
- Experience with AI/ML pipelines or data science platforms
- Familiarity with Airflow or similar orchestration tools
- Exposure to DevOps practices and CI/CD pipelines

Soft Skills:
- Engineer-first mindset
- Team-oriented culture
- Growth mindset
- Strong problem-solving skills

Educational Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
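To make the FastAPI requirement concrete, here is a minimal, hedged sketch of the kind of endpoint work such a role involves. Nothing below comes from Tricon: the service name, the Item model, and the in-memory store (a stand-in for PostgreSQL) are all invented for illustration.

```python
# Minimal FastAPI sketch; all names are hypothetical, not from the posting.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-service")

class Item(BaseModel):
    name: str
    price: float

_DB: dict[int, Item] = {}  # in-memory stand-in for PostgreSQL

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    # Reject duplicate keys with a 409, a common REST convention.
    if item_id in _DB:
        raise HTTPException(status_code=409, detail="item exists")
    _DB[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _DB:
        raise HTTPException(status_code=404, detail="not found")
    return _DB[item_id]
```

Runnable locally with `uvicorn module_name:app --reload`.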
Posted 1 day ago
9.0 - 14.0 years
20 - 35 Lacs
Chennai, Bengaluru
Work from Office
Dear Candidate, we are hiring for an "AWS Data Engineer" role for a leading MNC.

Work Mode: Hybrid
Experience: 7 to 14 years
Location: Chennai / Bangalore
Interview Date: 23rd Aug 2025 (Saturday)
Interview Mode: Face to Face
Primary Skills: PySpark (optimization), Redshift, Glue, SQL (queries), ETL (pipelines), Python (coding)

Detailed JD:
Seeking a developer who has good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources.
- Can optimize data models for performance and efficiency
- Able to write SQL queries to support data analysis and reporting
- Design, implement, and maintain the data architecture for all AWS data services
- Work with stakeholders to identify business needs and requirements for data-related projects
- Design and implement ETL processes to load data into the data warehouse

Responsibility:
We are seeking a highly skilled senior/junior AWS Developer to join our team. With a primary focus on SQL, the ideal candidate will also have experience with Agile methodologies. As a Senior AWS Developer, you will be responsible for optimizing data models for performance and efficiency, writing SQL queries to support data analysis and reporting, and designing and implementing ETL processes to load data into the data warehouse. You will also work with stakeholders to identify business needs and requirements for data-related projects and design and maintain the data architecture for all AWS data services. The ideal candidate will have at least 5 years of work experience and be comfortable working in a hybrid setting.
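Since the posting singles out "PySpark (optimization)", here is a hedged sketch of the kind of optimization it likely refers to: filtering early, broadcasting a small dimension table to avoid a shuffle join, and writing columnar output. All paths, table names, and columns are invented.

```python
# PySpark optimization sketch; bucket paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("etl-optimization-sketch").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")    # large fact table
regions = spark.read.parquet("s3://example-bucket/regions/")  # small dimension

result = (
    orders
    .filter(F.col("order_date") >= "2025-01-01")  # prune rows before the join
    .join(broadcast(regions), "region_id")        # broadcast avoids a shuffle
    .groupBy("region_name")
    .agg(F.sum("amount").alias("total_amount"))
)

# Columnar Parquet output keeps downstream Athena/Redshift Spectrum reads cheap.
result.write.mode("overwrite").parquet("s3://example-bucket/agg/region_totals/")
```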
Posted 1 day ago
6.0 - 9.0 years
17 - 22 Lacs
Pune
Work from Office
Role & responsibilities: AWS Redshift, AWS Glue
Posted 2 days ago
6.0 - 9.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Role & responsibilities: AWS Redshift, AWS Glue
Posted 2 days ago
6.0 - 9.0 years
17 - 22 Lacs
Hyderabad
Work from Office
Role & responsibilities: AWS Redshift, AWS Glue
Posted 2 days ago
5.0 - 10.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Hi, greetings from Preludesys India Pvt Ltd! We are hiring for one of our prestigious clients for the position below.

Job Posting: Data Modeler - SA
Notice Period: Immediate - 30 Days

Role Overview:
We are looking for an experienced Data Modeler with a strong foundation in dimensional data modeling and a proven ability to design and maintain conceptual, logical, and physical data models. The ideal candidate will have 5+ years of experience in data modeling and architecture, preferably within the banking or financial services industry.

Key Responsibilities:
- Design, develop, and maintain dimensional data models to support analytics and reporting
- Design conceptual, logical, and physical data models
- Utilize AWS services for scalable data model design
- Collaborate with business stakeholders, data architects, and engineers to ensure data models align with business rules and data governance standards
- Translate business requirements into scalable and efficient data models
- Maintain comprehensive documentation for data models, metadata, and data dictionaries
- Ensure consistency and integrity of data models across systems and platforms
- Partner with data engineering teams to implement models in AWS-based environments, including Redshift, Glue, and Lake Formation

Required Skills and Qualifications:
- 5+ years of experience in data modeling, with a focus on dimensional modeling and data warehouse design
- Proficiency in developing conceptual, logical, and physical data models
- Strong understanding of data governance, data quality, and metadata management
- Hands-on experience with AWS services such as Redshift, Glue, and Lake Formation
- Familiarity with data modeling tools (e.g., ER/Studio, ERwin, or similar)
- Excellent communication skills and ability to work with cross-functional teams

Preferred Qualifications:
- Experience in the banking or financial services sector
- Knowledge of data lake architecture and modern data stack tools
- AWS or data modeling certifications are a plus
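As a purely illustrative sketch of the dimensional modeling described above, here is a tiny star schema (one dimension, one fact) created on Redshift via psycopg2. The cluster endpoint, credentials, and table names are placeholders; the DISTSTYLE/DISTKEY/SORTKEY choices are one reasonable pattern, not the client's actual design.

```python
# Star-schema DDL sketch for Redshift; connection details are placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key BIGINT IDENTITY(1,1),
    customer_id  VARCHAR(32) NOT NULL,
    segment      VARCHAR(64)
) DISTSTYLE ALL;                                 -- small dim: copy to all nodes

CREATE TABLE IF NOT EXISTS fact_transactions (
    customer_key BIGINT,
    txn_date     DATE NOT NULL,
    amount       DECIMAL(18, 2)
) DISTKEY (customer_key) SORTKEY (txn_date);     -- co-locate joins, prune scans
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="modeler", password="...",
)
with conn, conn.cursor() as cur:  # context manager commits the transaction
    cur.execute(DDL)
```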
Posted 2 days ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities:
- Data Engineering & Analytics: Strong background in building scalable data pipelines and analytics platforms
- Databricks (AWS preferred): Mandatory hands-on expertise in Databricks, including cluster management, notebooks, job orchestration, and optimization
- AWS Cloud Services: Proficiency in the AWS ecosystem (S3, Glue, EMR, Lambda, Redshift, IAM, CloudWatch)
- Programming: Expertise in PySpark and Python for ETL, transformations, and analytics
- GenAI & LLMs: Experience with Large Language Models (LLMs), fine-tuning, and enterprise integration
- CI/CD & DevOps Knowledge: Familiarity with Git-based workflows, deployment pipelines, and automation

Preferred candidate profile:
- 8-12 years of IT experience with a strong focus on Data Engineering & Cloud Analytics
- Minimum 4-5 years of hands-on Databricks experience (preferably on AWS)
- Strong expertise in PySpark, Python, SQL, and AWS Data Services
- Experience in LLM fine-tuning, GenAI automation, and enterprise integration
- Proven ability to lead teams, deliver projects, and engage stakeholders
- Strong problem-solving, communication, and analytical skills
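As a hedged sketch of the "job orchestration" piece, triggering a Databricks job from outside a notebook can go through the Jobs REST API's run-now endpoint. The workspace host, token, and job_id below are placeholders, and real projects would more likely use the official databricks-sdk; this only illustrates the shape of the call.

```python
# Databricks Jobs API sketch; host, token, and job_id are placeholders.
import requests

DATABRICKS_HOST = "https://example.cloud.databricks.com"
TOKEN = "dapi..."  # personal access token (placeholder)

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 12345, "notebook_params": {"run_date": "2025-01-01"}},
    timeout=30,
)
resp.raise_for_status()
print("triggered run:", resp.json()["run_id"])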
Posted 2 days ago
4.0 - 8.0 years
10 - 18 Lacs
Kolkata, Pune, Delhi / NCR
Work from Office
• Design, develop, and implement robust microservices-based applications on AWS using Java.
• Lead the architecture and design of EKS-based solutions, ensuring seamless deployment and scalability.
• Collaborate with cross-functional teams to gather and analyze functional requirements, translating them into technical specifications.
• Define and enforce best practices for software development, including coding standards, code reviews, and documentation.
• Identify non-functional requirements such as performance, scalability, security, and reliability; ensure these are met throughout the development lifecycle.
• Conduct architectural assessments and provide recommendations for improvements to existing systems.
• Mentor and guide junior developers in best practices and architectural principles.
• Proficiency in Java programming language with experience in frameworks such as Spring Boot.
• Strong understanding of RESTful APIs and microservices architecture.
• Experience with AWS services, especially EKS, Lambda, S3, RDS, DynamoDB, and CloudFormation.
• Familiarity with CI/CD pipelines and tools like Jenkins or GitLab CI.
• Ability to design data models for relational and NoSQL databases.
• Experience in designing applications for high availability, fault tolerance, and disaster recovery.
• Knowledge of security best practices in cloud environments.
• Strong analytical skills to troubleshoot performance issues and optimize system efficiency.
• Excellent communication skills to articulate complex concepts to technical and non-technical stakeholders.
Posted 2 days ago
6.0 - 11.0 years
0 - 0 Lacs
Chennai
Hybrid
Job details:
Title: AWS Data Engineer
Type: Hybrid
Location: Chennai
Key Skills: AWS Glue, Redshift, S3, Lambda, Athena

- Hands-on experience as a Data Engineer with AWS: Glue, Lambda, SQL, Python, Redshift
- Must have working knowledge of designing and implementing data pipelines on any of the cloud providers (AWS is preferred)
- Must be able to work with large volumes of data coming from various sources; perform data cleansing, data validation, etc.
- Hands-on ETL developer who is good at Python and SQL
- AWS services: Glue, Glue crawlers, Lambda, Redshift, Athena, S3, EC2, IAM; monitoring and logging mechanisms: AWS CloudWatch, setting up alerts
- Deployment knowledge on cloud; integrate CI/CD pipelines to build artifacts and deploy changes to higher environments
- Scheduling frameworks: Airflow, AWS Step Functions
- Excellent communication skills; should be able to work collaboratively with other teams
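As a rough sketch of the Glue-plus-CloudWatch-alerting combination above, the snippet below starts a Glue job run and creates a CloudWatch alarm with boto3. The job name, SNS topic ARN, and the choice of Glue's failed-tasks metric are assumptions for illustration; exact metric names and dimensions should be checked against the Glue job-metrics documentation.

```python
# Glue job trigger + CloudWatch alarm sketch; all names are hypothetical.
import boto3

glue = boto3.client("glue", region_name="ap-south-1")
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

run = glue.start_job_run(
    JobName="example-nightly-etl",
    Arguments={"--source_path": "s3://example-bucket/raw/"},
)
print("started run:", run["JobRunId"])

# Alarm on a Glue failure metric so on-call is paged via SNS (assumed metric).
cloudwatch.put_metric_alarm(
    AlarmName="example-nightly-etl-failures",
    Namespace="Glue",
    MetricName="glue.driver.aggregate.numFailedTasks",
    Dimensions=[{"Name": "JobName", "Value": "example-nightly-etl"},
                {"Name": "JobRunId", "Value": "ALL"}],
    Statistic="Sum", Period=300, EvaluationPeriods=1,
    Threshold=1, ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:data-alerts"],
)
```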
Posted 3 days ago
8.0 - 10.0 years
18 - 33 Lacs
Pune
Hybrid
Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using AWS services such as S3, Lambda, Step Functions, etc.
- Develop ETL processes using PySpark and Redshift to extract insights from NoSQL databases like DynamoDB.
- Ensure high availability and scalability of the data warehousing infrastructure on AWS.
- Troubleshoot complex issues related to data processing, storage, and retrieval.
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
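A common way the PySpark-to-Redshift flow above is implemented is to stage transformed Parquet in S3 and then load it with Redshift's COPY command. The sketch below assumes that pattern; bucket paths, table names, and the IAM role ARN are placeholders, not details from the posting.

```python
# Stage-then-COPY sketch; all paths, tables, and ARNs are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stage-and-copy").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")
clean = (events.dropDuplicates(["event_id"])
               .withColumn("event_date", F.to_date("event_ts")))
clean.write.mode("overwrite").parquet("s3://example-bucket/stage/events/")

# Redshift loads Parquet directly from S3; run this via any SQL client/driver.
COPY_SQL = """
COPY analytics.events
FROM 's3://example-bucket/stage/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET;
"""
```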
Posted 3 days ago
3.0 - 7.0 years
11 - 18 Lacs
Gurugram
Hybrid
Job Title: AWS Data Engineer
Location: Gurugram, India
No of openings:
Experience Required:
Company: PwC India

Job Description:
PwC India is seeking a talented AWS Data Engineer to join our team in Gurgaon. The ideal candidate will have 2-7 years of experience with a strong focus on AWS services, data engineering, and analytics. This role offers an exciting opportunity to work on cutting-edge projects for global clients while leveraging your expertise in cloud technologies and data management.

Key Responsibilities:
1. AWS Service Implementation: Design, develop, and maintain data solutions using AWS services, with a particular focus on S3, Athena, Glue, EMR, and Redshift. Implement and optimize data lakes and data warehouses on AWS platforms.
2. Data Pipeline Development: Create and maintain efficient ETL processes using PySpark and other relevant tools. Develop scalable and performant data pipelines to process large volumes of data. Implement data quality checks and monitoring systems to ensure data integrity.
3. Database Management: Work proficiently with SQL and NoSQL databases, optimizing queries and database structures for performance. Design and implement database schemas that align with business requirements and data models.
4. Performance Optimization: Continuously monitor and optimize the performance of data processing jobs and queries. Implement best practices for cost optimization in AWS environments. Troubleshoot and resolve performance bottlenecks in data pipelines and analytics processes.
5. Collaboration and Documentation: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Develop and maintain comprehensive documentation for data architectures, processes, and best practices. Participate in code reviews and contribute to the team's knowledge base.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 2-7 years of experience in data engineering, with a focus on AWS technologies
- Strong hands-on experience with AWS services, particularly S3, Athena, Glue, EMR, and Redshift
- Proficiency in Python and PySpark for data processing and analysis
- Very strong SQL/PL-SQL skills
- Demonstrated ability to optimize data pipelines and queries for performance
- Strong problem-solving skills and attention to detail

Preferred Skills:
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect)
- Familiarity with data visualization tools (e.g., Tableau, Power BI)
- Experience with data modeling and data warehouse concepts
- Innovative thinking and creativity in solution delivery
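To illustrate the S3/Athena part of the stack named above, here is a short boto3 sketch that runs an Athena query over a data lake and polls for completion. The database, table, and results bucket are invented; the boto3 calls themselves (start_query_execution, get_query_execution, get_query_results) are standard Athena SDK operations.

```python
# Athena query sketch; database, table, and output bucket are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:  # simple poll; production code would add a timeout and backoff
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(len(rows) - 1, "result rows")  # first row is the column header
```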
Posted 3 days ago
7.0 - 12.0 years
7 - 17 Lacs
Hyderabad, Bengaluru
Hybrid
Hexaware Technologies is hiring AWS Redshift developers.

Primary Skill Set: AWS Redshift, Glue, Lambda, PySpark
Total Experience Required: 6 to 12 years
Location: Bangalore & Hyderabad only
Work Mode: Hybrid

Job Description:
Mandatory Skills:
- 8 to 10 years of experience with major cloud platforms such as Amazon AWS
- Experience as a developer in multiple cloud technologies including AWS EC2, S3, Amazon API Gateway, AWS Lambda, AWS Glue, AWS RDS, and AWS Step Functions
- Good knowledge of the AWS environment and services, with an understanding of S3 storage
- Must have good knowledge of AWS Glue and serverless architecture
- Must have good knowledge of PySpark
- Must have good knowledge of SQL

Nice-to-have skills:
- Collaborate with data analysts and stakeholders to meet data requirements
- AWS Postgres experience for DB design
- Must have worked with DynamoDB

Interested candidates, kindly share your updated resume to ramyar2@hexaware.com with the details below.
Full Name:
Contact No:
Total Exp:
Rel Exp in AWS:
Current & Joining Location:
Notice Period (If serving, mention LWD):
Current CTC:
Expected CTC:
Posted 1 month ago
5.0 - 9.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Role & responsibilities: Outline the day-to-day responsibilities for this role. Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications.
Posted 1 month ago
0.0 - 1.0 years
0 Lacs
Noida
Work from Office
We are excited to invite fresh BTech graduates to our Walk-In Drive for Trainee Roles at our Noida office. This is a great opportunity for recent graduates to kickstart their careers in one of the following domains:

Available Domains: Python, Java, Frontend Development, DevOps, Software Testing, Data Warehouse

Walk-In Dates: Wednesday, July 23, 2025 and Thursday, July 24, 2025
Important: Only 20 walk-in candidates will be shortlisted.

Eligibility Criteria:
- BTech degree completed (2022-2025 pass-outs)
- Basic knowledge in at least one of the mentioned domains
- Good communication skills
- Eagerness to learn and grow in the tech field

How to Apply: Interested candidates must register using the form below. Only shortlisted candidates will be contacted with interview location details.
Apply Here: https://forms.gle/a9LesdmF7g1MM2PW7
Stipend/CTC: As per industry standards (to be discussed during the interview)
Posted 1 month ago
8.0 - 13.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Job Description:
We are seeking an experienced and visionary Senior Data Architect to lead the design and implementation of scalable enterprise data solutions. This is a strategic leadership role for someone who thrives in cloud-first, data-driven environments and is passionate about building future-ready data architectures.

Key Responsibilities:
- Define and implement an enterprise-wide data architecture strategy aligned with business goals
- Design and lead scalable, secure, and resilient data platforms for both structured and unstructured data
- Architect data lake/warehouse ecosystems and cloud-native solutions (Snowflake, Databricks, Redshift, BigQuery)
- Collaborate with business and tech stakeholders to capture data requirements and translate them into scalable designs
- Mentor data engineers, analysts, and other architects in data best practices
- Establish standards for data modeling, integration, and management
- Drive governance across data quality, security, metadata, and compliance
- Lead modernization and cloud migration efforts
- Evaluate new technologies and recommend adoption strategies
- Support data cataloging, lineage, and MDM initiatives
- Ensure compliance with privacy standards (e.g., GDPR, HIPAA, CCPA)

Required Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- 10+ years of experience in data architecture; 3+ years in a senior/lead capacity
- Hands-on experience with modern cloud data platforms: Snowflake, Azure Synapse, AWS Redshift, BigQuery, etc.
- Strong skills in data modeling tools (e.g., Erwin, ER/Studio)
- Deep understanding of ETL/ELT, APIs, and data integration
- Expertise in SQL, Python, and data-centric languages
- Experience with data governance, RBAC, encryption, and compliance frameworks
- DevOps/CI-CD experience in data pipelines is a plus
- Excellent communication and leadership skills
Posted 1 month ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Job Title: Data Engineer
Client: Amazon
Employment Type: Full-time (On-site)
Payroll: BCT Consulting Pvt Ltd
Work Location: Hyderabad (Work from Office, Monday to Friday, General Shift)
Experience Required: 5+ Years
Joining Mode: Permanent with BCT Consulting Pvt Ltd, deployed at Amazon

About the Role:
We are seeking a highly skilled and motivated Data Engineer with strong expertise in SQL, Python, Big Data technologies, AWS, Airflow, and Redshift. The ideal candidate will play a key role in building and optimizing data pipelines, ensuring data integrity, and enabling scalable data solutions across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and SQL
- Work with Big Data technologies to process and manage large datasets efficiently
- Implement and manage workflows using Apache Airflow (see the sketch below)
- Develop and optimize data models and queries in Amazon Redshift
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Ensure data quality, consistency, and security across all data platforms
- Monitor and troubleshoot data pipeline performance and reliability
- Leverage AWS services (S3, Lambda, Glue, EMR, etc.) for cloud-native data engineering solutions

Required Skills & Qualifications:
- 5+ years of experience in Data Engineering
- Strong proficiency in SQL and Python
- Hands-on experience with Big Data tools (e.g., Spark, Hadoop)
- Expertise in AWS cloud services related to data engineering
- Experience with Apache Airflow for workflow orchestration
- Solid understanding of Amazon Redshift and data warehousing concepts
- Excellent problem-solving and communication skills
- Ability to work in a fast-paced, collaborative environment

Nice to Have:
- Experience with CI/CD pipelines and DevOps practices
- Familiarity with data governance and compliance standards

Perks & Benefits:
- Opportunity to work on cutting-edge data technologies
- Collaborative and innovative work culture
- Immediate joining preferred
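Purely as a sketch of the Airflow orchestration mentioned above: a three-task extract/transform/load DAG. The DAG id, schedule, and callables are invented; note that the `schedule` keyword is the Airflow 2.4+ spelling, while older releases use `schedule_interval`.

```python
# Airflow DAG sketch; dag_id, schedule, and task bodies are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # e.g., pull raw files from S3

def transform():
    pass  # e.g., run a PySpark/Glue transformation

def load():
    pass  # e.g., COPY the result into Redshift

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions: schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load  # linear dependency chain
```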
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Chennai, Bengaluru
Work from Office
Job Description:
Job Title: ETL Testing
Experience: 5-8 Years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills:
- Analytical skills to understand requirements, develop test cases, and understand and manage data; strong SQL skills
- Hands-on testing of data pipelines built using Glue, S3, Redshift, and Lambda
- Collaborate with developers to build automated testing where appropriate
- Understanding of data concepts like data lineage, data integrity, and quality
- Experience testing financial data is a plus
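As a hedged sketch of the automated pipeline testing described above, here is a pytest module that checks staging-to-mart row counts and business-key completeness over SQL. The connection helper, schemas, and table names are invented for illustration.

```python
# pytest ETL checks; connection details and table names are placeholders.
import psycopg2

def get_connection():
    return psycopg2.connect(host="example-redshift-host", port=5439,
                            dbname="analytics", user="qa", password="...")

def scalar(sql: str) -> int:
    """Run a query that returns a single integer and fetch it."""
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_row_counts_match():
    src = scalar("SELECT COUNT(*) FROM staging.trades")
    tgt = scalar("SELECT COUNT(*) FROM mart.trades")
    assert src == tgt, f"row count mismatch: staging={src}, mart={tgt}"

def test_no_null_business_keys():
    nulls = scalar("SELECT COUNT(*) FROM mart.trades WHERE trade_id IS NULL")
    assert nulls == 0, f"{nulls} rows missing trade_id"
```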
Posted 1 month ago
5.0 - 7.0 years
15 - 30 Lacs
Gurugram
Remote
- Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS.
- Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions.
- Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks.
- Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs.
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency.
- Ensure data quality, security, and governance across all systems.
- Communicate technical solutions clearly and effectively with both technical and non-technical stakeholders.

Required Skills & Qualifications:
- 5+ years of experience in data engineering roles
- Strong hands-on experience with Amazon Web Services (AWS), particularly data-related services (e.g., S3, Glue, Lambda, Redshift, EMR, Athena)
- Proficiency in Python for scripting and data processing
- Experience with SQL and working with relational databases
- Solid understanding of data architecture, data modeling, and data warehousing concepts
- Experience with CI/CD pipelines and version control tools (e.g., Git)
- Excellent verbal and written communication skills
- Proven ability to work independently in a fully remote environment

Preferred Qualifications:
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions
- Familiarity with big data technologies such as Apache Spark or Hadoop
- Exposure to infrastructure-as-code tools like Terraform or CloudFormation
- Knowledge of data privacy and compliance standards
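One common event-driven ingestion pattern behind the S3/Lambda pairing above is an S3-triggered Lambda handler. The sketch below copies each newly arrived object into a "processed" area; the destination bucket and prefix routing are assumptions for illustration, but the S3 event record structure and copy_object call are standard.

```python
# S3-triggered Lambda sketch; destination bucket and prefixes are invented.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Copy each newly arrived object into a 'processed' key space."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket="example-processed-bucket",
            CopySource={"Bucket": bucket, "Key": key},
            Key=f"processed/{key}",
        )
    return {"statusCode": 200, "body": json.dumps("ok")}
```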
Posted 1 month ago
5.0 - 10.0 years
15 - 20 Lacs
Chennai, Bengaluru
Work from Office
Job Description:
Job Title: Data Engineer
Experience: 5-8 Years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills:
- 5-8 years of experience as a back-end data engineer
- Strong experience in SQL
- Strong knowledge of and experience with Python and PySpark
- Experience in AWS
- Experience in Docker and OpenShift
- Hands-on experience with REST concepts
- Design and develop business solutions on the data front
- Experience implementing new enhancements and handling defect triage
- Strong analytical abilities

Additionally Preferred Skills/Competencies:
- Jira, Bitbucket
- Experience with Kafka
- Experience with Snowflake
- Domain knowledge in Banking
- Analytical skills
- Excellent communication skills
- Working knowledge of Agile

Thanks & Regards,
Suresh Kumar Raja, CGI.
Posted 1 month ago
5.0 - 9.0 years
10 - 12 Lacs
Bengaluru
Remote
Sr Data Engineer
Tenure: Min. 3 months (potential for extension). Contract. Remote.

We are seeking a Sr. Data Engineer to join our technology team. This is a hands-on position responsible for the build and continued evolution of the data platform, business applications, and integration tools. We are looking for a hands-on engineer who can recommend best practices for working with enterprise data, and who is very strong with AWS products, to help build out data pipelines, create jobs, and manage the quality of the data warehouse and integrated tools.

RESPONSIBILITIES
- Design, develop, and maintain scalable, efficient data pipelines to support ETL/ELT processes across multiple sources and systems
- Partner with Data Science, Analytics, and Business teams to understand data needs, prioritize use cases, and deliver reliable datasets and models
- Monitor, optimize, and troubleshoot data jobs, ensuring high availability and performance of data infrastructure
- Build and manage data models and schemas in Redshift and other data technologies, enabling self-service analytics
- Implement data quality checks, validation rules, and alerting mechanisms to ensure trust in data (see the sketch below)
- Leverage AWS services like Glue, Lambda, S3, Athena, and EMR to build modular, reusable data solutions
- Drive improvements in data lineage, cataloging, and documentation to ensure transparency and reusability of data assets
- Create and maintain technical documentation and version-controlled workflows (e.g., Git, dbt)
- Contribute to and promote a culture of continuous improvement, mentoring peers and advocating for scalable and modern data practices
- Participate in sprint planning, code reviews, and team retrospectives as part of an Agile development process
- Stay current on industry trends and emerging technologies to identify opportunities for innovation and automation

REQUIREMENTS
- Advanced Python, including experience building APIs, scripting ETL processes, and automating workflows
- Expert in SQL, with the ability to write complex queries, optimize performance, and work across large datasets
- Hands-on experience with the AWS data ecosystem including Redshift, S3, Glue, Athena, EMR, EC2, DynamoDB, Lambda, and Redis
- Strong understanding of data warehousing and data modeling principles (e.g., star/snowflake schema, dimensional modeling)
- Familiarity with dbt Labs and modern ELT/analytics engineering practices
- Experience working with structured, semi-structured, and unstructured data
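A loose sketch of the "data quality checks, validation rules, and alerting" responsibility: evaluate a few simple rules on a dataframe, then publish a custom CloudWatch metric that an alarm can watch. Every rule, threshold, and name below is invented; reading Parquet from S3 with pandas assumes s3fs is installed, and the freshness check assumes a tz-aware loaded_at column.

```python
# Data-quality-plus-alerting sketch; rules, names, and thresholds are invented.
import boto3
import pandas as pd

def run_checks(df: pd.DataFrame) -> dict[str, bool]:
    return {
        "no_duplicate_ids": df["id"].is_unique,
        "no_null_amounts": df["amount"].notna().all(),
        # Assumes loaded_at is tz-aware UTC.
        "fresh_within_2_days": (
            pd.Timestamp.now(tz="UTC") - df["loaded_at"].max()
        ).days <= 2,
    }

def publish(results: dict[str, bool]) -> None:
    # 1.0 = failed check; a CloudWatch alarm on Sum >= 1 can page on-call.
    boto3.client("cloudwatch").put_metric_data(
        Namespace="ExampleDataPlatform",
        MetricData=[
            {"MetricName": name, "Value": 0.0 if passed else 1.0}
            for name, passed in results.items()
        ],
    )

if __name__ == "__main__":
    frame = pd.read_parquet("s3://example-bucket/mart/orders/")  # needs s3fs
    publish(run_checks(frame))
```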
Posted 1 month ago
6.0 - 11.0 years
9 - 14 Lacs
Noida
Work from Office
Responsibilities:
* Design, develop & maintain data pipelines using AWS, Python & SQL.
* Optimize performance with Apache Spark & Amazon Redshift.
* Collaborate on cloud architecture with cross-functional teams.
Posted 1 month ago
10.0 - 15.0 years
30 - 40 Lacs
Bengaluru
Hybrid
We are looking for a Cloud Data Engineer with strong hands-on experience in data pipelines, cloud-native services (AWS), and modern data platforms like Snowflake or Databricks. Alternatively, we're open to Data Visualization Analysts with strong BI experience and exposure to data engineering or pipelines. You will collaborate with technology and business leads to build scalable data solutions, including data lakes, data marts, and virtualization layers using tools like Starburst. This is an exciting opportunity to work with modern cloud tech in a dynamic, enterprise-scale financial services environment.

Key Responsibilities:
- Design and develop data pipelines for structured/unstructured data in AWS
- Build semantic layers and virtualization layers using Starburst or similar tools
- Create intuitive dashboards and reports using Power BI/Tableau
- Collaborate on ETL designs and support testing (SIT/UAT)
- Optimize Spark jobs and ETL performance
- Implement data quality checks and validation frameworks
- Translate business requirements into scalable technical solutions
- Participate in design reviews and documentation

Skills & Qualifications:
Must-Have:
- 10+ years in Data Engineering or related roles
- Hands-on with AWS Glue, Redshift, Athena, EMR, Lambda, S3, Kinesis
- Proficient in HiveQL, Spark, Python, Scala
- Experience with modern data platforms (Snowflake/Databricks)
- 3+ years in ETL tools (Informatica, SSIS) & recent experience in cloud-based ETL
- Strong understanding of Data Warehousing, Data Lakes, and Data Mesh

Preferred:
- Exposure to data virtualization tools like Starburst or Denodo
- Experience in the financial services or banking domain
- AWS Certification (Data specialty) is a plus
Posted 1 month ago
12.0 - 15.0 years
0 - 3 Lacs
Bengaluru
Hybrid
Role & responsibilities:
- Design, develop, and maintain scalable enterprise data architecture incorporating data warehouse, data lake, and data mesh concepts
- Create and maintain data models, schemas, and mappings that support reporting, business intelligence, analytics, and AI/ML initiatives
- Establish data integration patterns for batch and real-time processing using AWS services (Glue, DMS, Lambda), Redshift, Snowflake, or Databricks
- Define technical specifications for data storage, data processing, and data access patterns
- Develop data models and enforce data architecture standards, policies, and best practices
- Partner with business stakeholders to translate requirements into architectural solutions
- Lead data modernization initiatives, including legacy system migrations
- Create roadmaps for evolving data architecture to support future business needs
- Provide expert guidance on complex data problems and architectural decisions

Preferred candidate profile:
- Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree preferred
- 8+ years of experience in data architecture, database design, data modeling, or related roles
- 5+ years of experience with cloud data platforms, particularly AWS data services
- 3+ years of experience architecting MPP database solutions (Redshift, Snowflake, etc.)
- Expert knowledge of data warehouse architecture and dimensional modeling
- Strong understanding of the AWS data services ecosystem (Redshift, S3, Glue, DMS, Lambda)
- Experience with SQL Server and migration to cloud data platforms
- Proficiency in data modeling, entity relationship diagrams, and schema design
- Working knowledge of data integration patterns and technologies (ETL/ELT, CDC)
- Experience with one or more programming/scripting languages (Python, SQL, Shell)
- Familiarity with data lake architectures and technologies (Parquet, Delta Lake, Athena)
- Excellent verbal and written communication skills, with the ability to translate complex technical concepts to varied audiences
- Strong stakeholder management and influencing skills
- Experience implementing data warehouse, data lake, and data mesh architectures
- Good-to-have knowledge of machine learning workflows and feature engineering
- Understanding of regulatory requirements related to data (FedRAMP, GDPR, CCPA, etc.)
- Experience with big data technologies (Spark, Hadoop)
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Hiring for a FAANG company. Note: This position is open only for women professionals returning to the workforce after a career break (9+ months career gap, e.g., last working day prior to Nov 2024). We encourage you to apply only if you fit this criterion.

Position Overview:
This is a Level 5 Data Engineer role within a leading e-commerce organization's Selling Partner Services division in India. The position focuses on building and scaling API authorization and customization systems that serve thousands of global selling partners. This is a senior-level position requiring significant technical expertise and leadership capabilities.

Team Context & Mission:
- Organization: Selling Partner Services division
- Focus: API authorization and customization systems for global selling partners
- Mission: Create flexible, reliable, and extensible API solutions to help businesses thrive on the platform
- Culture: Startup excitement with enterprise-level resources and scale
- Impact: Direct influence on thousands of global selling partners

Key Responsibilities:
Technical Leadership
- Lead design and implementation of complex data pipelines and ETL processes
- Architect scalable, high-performance data systems using cloud technologies and big data platforms
- Evaluate and recommend new technologies and tools for data infrastructure enhancement
- Troubleshoot and resolve complex data-related issues in production environments
Collaboration & Stakeholder Management
- Work closely with data scientists, analysts, and business stakeholders
- Understand data requirements and implement appropriate solutions
- Contribute to the development of data governance policies and procedures
Performance & Quality Optimization
- Optimize data storage and retrieval systems for performance and cost-effectiveness
- Implement data quality checks and monitoring systems
- Ensure data integrity and reliability across all systems
Mentorship & Leadership
- Mentor junior engineers on the team
- Provide technical leadership on data engineering best practices and methodologies
- Drive adoption of industry standards and innovative approaches

Required Qualifications (Must-Have):
Experience Requirements
- 5+ years of data engineering experience; senior-level expertise expected
- 5+ years of SQL experience; advanced SQL skills for complex data manipulation
- Data modeling, warehousing, and ETL pipeline building as core competencies
- Distributed systems knowledge: understanding of data storage and computing in distributed environments
Technical Skills
- Advanced proficiency in designing and implementing data solutions
- Strong understanding of data architecture principles
- Experience with production-level data systems
- Knowledge of data governance and quality assurance practices

Preferred Qualifications:
Cloud Technology Stack
- Data Warehousing: Redshift, Snowflake, BigQuery
- Object Storage: S3, Azure Blob, Google Cloud Storage
- ETL Services: AWS Glue, Azure Data Factory, Google Dataflow
- Big Data Processing: EMR, Databricks, Apache Spark
- Real-time Streaming: Kinesis, Kafka, Apache Storm
- Data Delivery: Firehose, Apache NiFi
- Serverless Computing: Lambda, Azure Functions, Google Cloud Functions
- Identity Management: IAM, Active Directory, role-based access control
Non-Relational Database Experience
- Object Storage: S3, blob storage systems
- Document Stores: MongoDB, CouchDB
- Key-Value Stores: Redis, DynamoDB
- Graph Databases: Neo4j, ArangoDB
- Column-Family: Cassandra, HBase

Key Success Factors:
- Scalability Focus: Building systems that can handle massive enterprise scale
- Performance Optimization: Continuous improvement of system efficiency
- Quality Assurance: Maintaining high data quality and reliability standards
- Innovation: Staying current with emerging technologies and best practices
- Collaboration: Effective partnership with stakeholders across the organization

This role represents a significant opportunity for a senior data engineer to make a substantial impact on a global e-commerce seller ecosystem while working with cutting-edge technologies and leading a team of talented professionals.
Posted 1 month ago
7.0 - 9.0 years
7 - 17 Lacs
Pune
Remote
Requirements for the candidate:
The role will require deep knowledge of data engineering techniques to create data pipelines and build data assets.
- At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, following Python best practices
- Strong experience in code optimization using Spark SQL and PySpark
- Understanding of code versioning, Git repositories, and JFrog Artifactory
- AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each
- Code refactoring of legacy codebases: clean, modernize, and improve readability and maintainability
- Unit tests/TDD: write tests before code, ensure functionality, catch bugs early (see the sketch below)
- Fixing difficult bugs: debug complex code, isolate issues, and resolve performance, concurrency, or logic flaws
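A test-first sketch matching the unit tests/TDD bullet: the pytest test below is written against a small, pure transformation function alongside (or, in strict TDD, before) its implementation. The function and field names are illustrative only.

```python
# TDD sketch; normalize_record and its fields are hypothetical examples.
def normalize_record(raw: dict) -> dict:
    """Trim strings, lowercase emails, and coerce amount to float."""
    return {
        "name": raw["name"].strip(),
        "email": raw["email"].strip().lower(),
        "amount": float(raw["amount"]),
    }

def test_normalize_record():
    raw = {"name": "  Asha ", "email": "ASHA@EXAMPLE.COM ", "amount": "42.5"}
    assert normalize_record(raw) == {
        "name": "Asha",
        "email": "asha@example.com",
        "amount": 42.5,
    }
```

Run with `pytest` against the module; keeping transformations pure like this makes them trivially testable before they are wired into a pipeline.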
Posted 1 month ago