
3098 Redshift Jobs - Page 20

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

8 - 8 Lacs

Chennai

On-site

Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment. Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines. Required Skills & Qualifications: Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
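Illustrative sketch (not part of the posting): the kind of Glue-style PySpark ETL step this role describes, reading raw data from S3, cleaning it, and writing partitioned Parquet that Athena or Redshift Spectrum can query. The bucket names, paths, and columns are hypothetical, and a production AWS Glue job would typically use GlueContext and DynamicFrames rather than a bare SparkSession.

```python
# Hypothetical Glue-style PySpark ETL step; paths and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest raw CSV data landed in S3 (hypothetical bucket/prefix).
raw = (spark.read
       .option("header", "true")
       .csv("s3://example-raw-bucket/orders/"))

# Basic cleansing and typing.
clean = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .dropna(subset=["order_id", "order_ts"])
         .dropDuplicates(["order_id"]))

# Partition by date so Athena / Redshift Spectrum can prune partitions efficiently.
(clean
 .withColumn("order_date", F.to_date("order_ts"))
 .write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))
```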

Posted 1 week ago

Apply

4.0 - 6.0 years

2 - 8 Lacs

Noida

On-site

Position: Data Engineer (AWS QuickSight, Glue, PySpark) (Noida) (CE46SF RM 3386)

Education Required (Bachelor's / Master's / PhD): Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, or Engineering. AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer).

Must have skills: Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies. Strong experience with PySpark. Expertise in SQL and data modeling for relational and non-relational databases. Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.

Good to have: Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch). Understanding of MLOps and model deployment best practices. Hands-on experience with AWS services for ML. Experience or familiarity with the HVAC domain is a plus.

Key Responsibilities: Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources. Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval. Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting. Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights. Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions. Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance. Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools. Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.

Required Skills: Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies. Strong experience with PySpark for large-scale data processing and transformation. Expertise in SQL and data modeling for relational and non-relational databases. Experience building and optimizing ETL pipelines and data integration workflows. Familiarity with business intelligence and visualization tools, especially Amazon QuickSight. Knowledge of data governance, security, and compliance best practices. Strong programming skills in Python; experience with automation and scripting. Ability to work collaboratively in agile environments and manage multiple priorities effectively. Excellent problem-solving and communication skills.

Job Category: Digital_Cloud_Web Technologies | Job Type: Full Time | Job Location: Noida | Experience: 4-6 years | Notice period: 0-15 days

Posted 1 week ago

Apply

8.0 - 12.0 years

3 - 5 Lacs

Noida

On-site

Posted On: 8 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies ? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations ? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description General Roles & Responsibilities: Technical Leadership: Demonstrate leadership, and ability to guide business and technology teams in adoption of best practices and standards Design & Development: Design, develop, and maintain robust, scalable, and high-performance data estate Architecture: Architect and design robust data solutions that meet business requirements & include scalability, performance, and security. Quality: Ensure the quality of deliverables through rigorous reviews, and adherence to standards. Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement. Collaboration: Work closely with system architects, data engineers, data scientists, data analysts, cloud engineers and other business stakeholders to determine optimal solution & architecture that is future-proof too. Innovation: Stay updated with the latest industry trends and technologies, and drive continuous improvement initiatives within the development team. Documentation: Create and maintain technical documentation, including design documents, and architectural user guides. Technical Responsibilities: Optimize data pipelines for performance and efficiency. Work with Databricks clusters and configuration management tools. Use appropriate tools in the cloud data lake development and deployment. Developing/implementing cloud infrastructure to support current and future business needs. Provide technical expertise and ownership in the diagnosis and resolution of issues. Ensure all cloud solutions exhibit a higher level of cost efficiency, performance, security, scalability, and reliability. Manage cloud data lake development and deployment on AWS /Databricks. 
Manage and create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions in Databricks Required Technical Skills: Experience & Proficiency with Databricks platform - Delta Lake storage, Spark (PySpark, Spark SQL). Must be well versed with Databricks Lakehouse, Unity Catalog concept and its implementation in enterprise environments. Familiarity of data design pattern - medallion architecture to organize data in a Lakehouse. Experience & Proficiency with AWS Data Services – S3, Glue, Athena, Redshift etc.| Airflow scheduling Proficiency in SQL and experience with relational databases. Proficiency in at least one programming language (e.g., Python, Java) for data processing and scripting. Experience with DevOps practices - AWS DevOps for CI/CD, Terraform/CDK for infrastructure as code Good understanding of data principles, Cloud Data Lake design & development including data ingestion, data modeling and data distribution. Jira: Proficient in using Jira for managing projects and tracking progress. Other Skills: Strong communication and interpersonal skills. Collaborate with data stewards, data owners, and IT teams for effective implementation Understanding of business processes and terminology – preferably Logistics Experienced with Scrum and Agile Methodologies Qualification Bachelor’s degree in information technology or a related field. Equivalent experience may be considered. Overall experience of 8-12 years in Data Engineering Mandatory Competencies Cloud - AWS Data Science - Databricks Database - SQL Data on Cloud - Azure Data Lake (ADL) Agile - Agile Data Analysis - Data Analysis Big Data - PySpark Data on Cloud - AWS S3 Data on Cloud - Redshift ETL - AWS Glue Python - Python DevOps - CI/CD Beh - Communication and collaboration Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
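As an illustrative aside (not part of the posting), here is a minimal PySpark/Delta Lake sketch of the medallion (bronze/silver/gold) pattern the role references, roughly as it might look in a Databricks notebook. The table, path, and column names are hypothetical; a real implementation would register these tables in Unity Catalog and schedule the flow as a job.

```python
# Hypothetical medallion-style flow on Databricks; schemas, tables, and columns are invented.
from pyspark.sql import functions as F

# Bronze: raw events appended as-is ("spark" is provided by the Databricks runtime;
# the bronze/silver/gold schemas are assumed to already exist).
bronze = spark.read.json("s3://example-landing/events/")
bronze.write.format("delta").mode("append").saveAsTable("bronze.events")

# Silver: cleaned, typed, deduplicated records.
silver = (spark.table("bronze.events")
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .dropDuplicates(["event_id"])
          .filter(F.col("event_ts").isNotNull()))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

# Gold: business-level aggregate for reporting.
gold = (spark.table("silver.events")
        .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
        .agg(F.count("*").alias("event_count")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_event_counts")
```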

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

AWS Data Engineer. Location: Remote (India). Experience: 3+ Years. Employment Type: Full-Time.

About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities: Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena. Process and integrate structured and unstructured data, including sensor/IoT and real-time streams. Optimize pipeline performance and ensure reliability and fault tolerance. Collaborate with cross-functional teams including data scientists and analysts. Perform data transformations using Python, Pandas, and SQL. Maintain data integrity, quality, and security across the platform. Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation. Support and monitor pipeline workflows, troubleshoot issues, and implement fixes. Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions.

Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field. 3+ years of experience in data engineering using AWS. Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA. Solid understanding of data modeling, warehousing, and pipeline orchestration. Experience with version control (Git) and infrastructure as code (Terraform).

Preferred Skills: Experience working with energy sector data or IoT/sensor-based data. Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn). Familiarity with big data technologies like Apache Spark and Kafka. Experience with data visualization tools (Tableau, Power BI, AWS QuickSight). Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS Databrew. AWS Certifications (Data Analytics, Solutions Architect).

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description. Location: Any UST location. Experience: 6 to 9 years. Mandatory Skills: PySpark, GCP, Hadoop, Hive, AWS. Good to Have: CI/CD and DevOps experience.

Job Description: We are seeking a highly skilled Senior Big Data Engineer to join our team at UST. The ideal candidate will have solid experience in Big Data technologies, cloud platforms, and data processing frameworks with a strong focus on PySpark and Google Cloud Platform (GCP).

Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL workflows using PySpark, Hadoop, and Hive. Deploy and manage big data workloads on cloud platforms like GCP and AWS. Work closely with cross-functional teams to understand data requirements and deliver high-quality solutions. Optimize data processing jobs for performance and cost-efficiency on cloud infrastructure. Implement automation and CI/CD pipelines to streamline deployment and monitoring of data workflows. Ensure data security, governance, and compliance in cloud environments. Troubleshoot and resolve data issues, monitoring job executions and system health.

Mandatory Skills: PySpark: strong experience in developing data processing jobs and ETL pipelines. Google Cloud Platform (GCP): hands-on experience with BigQuery, Dataflow, Dataproc, or similar services. Hadoop Ecosystem: expertise with Hadoop, Hive, and related big data tools. AWS: familiarity with AWS data services like S3, EMR, Glue, or Redshift. Strong SQL and data modeling skills.

Good To Have: Experience with CI/CD tools and DevOps practices (Jenkins, GitLab, Terraform, etc.). Containerization and orchestration knowledge (Docker, Kubernetes). Experience with Infrastructure as Code (IaC). Knowledge of data governance and data security best practices.

Skills: Spark, Hadoop, Hive, GCP

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 14 Lacs

Hyderabad

Work from Office

Position: Data Analyst | Interview: Walk-in | Type: Full-time | Location: Hyderabad | Exp: 3–8 yrs | Work: 5 Days WFO
Data Analysis & Insights | Reporting & Visualization | Data Extraction & ETL | Collaboration & Management. Contact: 6309124068 (Manoj)
Required Candidate Profile: Looking for Data Analysts with 3–8 yrs of experience in SQL, BI tools (Tableau/Power BI), and Python/AppScript. Should have experience in ETL, dashboarding, and A/B testing. Contact: 6309124068 (Manoj)

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description Zimetrics is a technology services and solutions provider specializing in Data, AI, and Digital. We help enterprises leverage the economic potential and business value of data from systems, machines, connected devices, and human-generated content. Our core principles are Integrity, Intellect, and Ingenuity, guiding our value system, engineering expertise, and organizational behavior. We are problem solvers and innovators who challenge conventional wisdom and believe in possibilities. Key Responsibilities: Design scalable and secure cloud-based data architecture solutions Lead data modeling, integration, and migration strategies across platforms Engage directly with clients to understand business needs and translate them into technical solutions Support sales/pre-sales teams with solution architecture, technical presentations, and proposals Collaborate with cross-functional teams including engineering, BI, and product Ensure best practices in data governance, security, and performance optimization Key Requirements: Strong experience with Cloud platforms (AWS, Azure, or GCP) Deep understanding of Data Warehousing concepts and tools (Snowflake, Redshift, BigQuery, etc.) Proven expertise in data modeling (conceptual, logical, and physical) Excellent communication and client engagement skills Experience in pre-sales or solution consulting is a strong advantage Ability to present complex technical concepts to non-technical stakeholders

Posted 1 week ago

Apply

3.0 - 5.0 years

0 - 3 Lacs

Hyderabad

Work from Office

Analyze large datasets to identify trends, patterns, and business insights. Perform exploratory data analysis (EDA) using SQL, Python, PySpark, or R. Support A/B testing design, execution, and analysis for product/marketing initiatives.
Required Candidate Profile: Strong expertise in SQL, BI tools, and scripting (Python). Optimize query performance and dashboard efficiency. Extract, clean, and transform data from structured/unstructured sources.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

🚀 We’re Hiring: Sr. Data Engineer (AWS/Azure) 📍 Location: Ahmedabad, Gujarat (Hybrid/Onsite) 📅 Experience: 4–8 Years 🕒 Type: Full-Time Are you passionate about designing scalable, cloud-native data solutions? Ready to work on cutting-edge tech with a global engineering team? Join Simform — a top-tier digital engineering partner for AWS, Microsoft, Google Cloud, and Databricks — and help us power the data behind next-gen digital products. 🔍 What You’ll Do As a Senior Data Engineer , you’ll design and build high-performance data pipelines using AWS and Azure services. You'll work closely with ML engineers, data scientists, and product teams to develop robust data infrastructure that supports real-time analytics, large-scale processing, and machine learning workflows. 🛠️ Tech You’ll Work With Cloud: AWS Glue, S3, Redshift, Kinesis, Lambda / Azure Data Factory, Synapse, Databricks, Microsoft Fabric Big Data & Streaming: Spark, Kafka, Flink, Airflow Databases: PostgreSQL, MongoDB, MySQL, SQL Server, Cassandra, Neptune Data Ops: ETL/ELT, data lake/lakehouse design, real-time + batch pipelines ✅ What We’re Looking For Strong hands-on experience with end-to-end data pipelines on AWS and/or Azure Proficiency in ETL/ELT , data modelling, and optimizing large-scale datasets (100GB+) Solid foundation in distributed data processing and data integration Bonus: Experience with ML pipeline integration , CI/CD for data , or data observability tools 💼 Why Join Simform? 🌱 A growth-driven, engineering-first culture 🤝 Flat hierarchy & transparent leadership 🧠 Learning & certification sponsorship 🧘 Free health insurance & flexible work options 🎮 Game zone, free snacks, subsidized lunch 🌍 Global exposure across North America & Europe If you’re ready to engineer real impact with cloud data solutions, let’s connect! 📩 Apply now or refer someone great! 👉🏿 yash.b@simformsolutions.com

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune, Gurugram

Hybrid

Python, PySpark, SQL/queries, AWS Elastic MapReduce (EMR), Amazon Managed Workflows for Apache Airflow (MWAA), AWS CDK, CloudFormation, Lambda, Step Functions, Athena, Redshift, Glue Catalog, S3, CI/CD: GitHub Actions
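By way of illustration only, a minimal Apache Airflow DAG sketch of the orchestration this stack implies: MWAA triggering a Glue job and then an Athena query via boto3. It assumes Airflow 2.4+ and the boto3 SDK; the job name, database, and bucket are hypothetical.

```python
# Hypothetical MWAA/Airflow DAG; job, database, and bucket names are invented.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job():
    # Kick off a (hypothetical) Glue ETL job.
    boto3.client("glue").start_job_run(JobName="example-orders-etl")


def refresh_athena_check():
    # Run a (hypothetical) sanity-check query against the curated data.
    boto3.client("athena").start_query_execution(
        QueryString="SELECT count(*) FROM curated.orders",
        QueryExecutionContext={"Database": "curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )


with DAG(
    dag_id="example_orders_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = PythonOperator(task_id="run_glue_etl", python_callable=start_glue_job)
    check = PythonOperator(task_id="athena_row_count_check", python_callable=refresh_athena_check)
    run_etl >> check
```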

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Title: Tableau Developer Job Type: Full-Time About Us: Torrent Technologies is the leading vendor for the National Flood Insurance program. We provide Flood insurance services to many large US insurance companies. Our platform is a complete policy management system, including all policy life cycle workflows (quoting, policy management, accounting, claims, agency management) as well as a robust reporting and analytics platform. Job Description: We are seeking a skilled Tableau Developer with strong SQL Server experience and a background in analytics presentation to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining interactive dashboards and reports that provide actionable insights to drive business decisions. You will collaborate with cross-functional teams to understand data requirements and deliver high-quality analytics solutions. Key Responsibilities: Design and develop interactive dashboards and visualizations using Tableau to present complex data in a clear and concise manner. Utilize SQL Server to extract, manipulate, and analyze data from various sources to support reporting and analytics needs. Collaborate with stakeholders to gather requirements and translate business needs into technical specifications for Tableau reports and dashboards. Optimize Tableau workbooks for performance and usability, ensuring a seamless user experience. Conduct data analysis to identify trends, patterns, and insights that can inform business strategies. Create and maintain documentation for Tableau dashboards, data sources, and processes. Provide training and support to end-users on Tableau tools and best practices. Stay updated on the latest Tableau features and industry trends to continuously improve analytics capabilities. Qualifications: Bachelor’s degree in Computer Science, Information Technology, Data Science, or a related field. Proven experience as a Tableau Developer with a strong portfolio of dashboards and reports. Proficient in SQL Server, including writing complex queries, stored procedures, and data manipulation. Strong analytical skills with the ability to interpret and present data effectively. Experience in data visualization best practices and analytics presentation techniques. Familiarity with data warehousing concepts and ETL processes is a plus. Excellent communication and collaboration skills, with the ability to work effectively in a team environment. Detail-oriented with strong problem-solving skills and the ability to manage multiple projects simultaneously. Preferred Qualifications: Tableau certification (e.g., Tableau Desktop Specialist, Tableau Desktop Certified Associate) is a plus. Experience with other data visualization tools or programming languages (e.g., Python, R) is a plus. Knowledge of business intelligence concepts and methodologies. Experience with AWS Redshift, AWS Glue. What We Offer: Competitive salary and benefits package. Opportunities for professional development and career growth. A collaborative and innovative work environment.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Job Summary: We are seeking a skilled and analytical Data Architect & Business Intelligence Specialist to design, model, and implement robust data architectures, pipelines, and reporting frameworks. This role will be responsible for building and maintaining data models, overseeing data migrations, and developing scalable data warehouse solutions to support business intelligence and analytics initiatives.

Key Responsibilities:
1. Data Architecture & Modeling: Design and maintain the enterprise data architecture aligned with business and technical requirements. Develop logical and physical data models using industry best practices. Establish and maintain metadata standards and data dictionaries. Ensure data consistency, quality, and governance across all systems.
2. Data Pipelines & ETL/ELT: Design and build efficient and scalable data pipelines for structured and unstructured data. Develop ETL/ELT processes using tools like Apache Airflow, Talend, Informatica, or Azure Data Factory. Optimize data ingestion, transformation, and loading procedures to support analytics.
3. Data Migration: Plan and execute data migration projects from legacy systems to modern data platforms. Ensure data integrity and minimal downtime during migration activities. Collaborate with stakeholders to map old data structures to new architecture.
4. Data Warehousing: Design, implement, and manage modern data warehouses (e.g., Snowflake, Redshift, BigQuery, Synapse). Ensure high performance, scalability, and security of data warehousing environments. Implement data partitioning, indexing, and performance tuning techniques.
5. Business Intelligence & Reporting: Collaborate with business stakeholders to gather reporting and analytics requirements. Build interactive dashboards and reports using tools like Power BI, Tableau, Looker, or Qlik. Enable self-service reporting and ensure data accuracy in BI platforms. Monitor data usage and performance, and drive continuous improvement in reporting frameworks.

Requirements:
Education & Experience: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. 5+ years of experience in data architecture, modeling, pipelines, and BI/reporting.
Technical Skills: Strong expertise in SQL and data modeling (3NF, dimensional, star/snowflake schemas). Experience with data warehouse technologies and cloud platforms (AWS, Azure, GCP). Proficiency in BI/reporting tools and data visualization best practices. Knowledge of Python, Scala, or other scripting languages is a plus. Familiarity with data governance, security, and compliance standards.
Soft Skills: Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills with both technical and non-technical stakeholders. Ability to translate complex technical concepts into business language.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About The Role: Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support development of data infrastructure on Databricks for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions. Location: Chennai

Qualifications: B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines. Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks. Highly proficient in SQL and data model (conceptual and logical) concepts. Highly proficient with Python & Spark (3+ years). Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc. 2+ years of hands-on experience with one of the top cloud platforms - AWS/GCP/Azure. Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc. Exposure to Hadoop & shell scripting a plus.

Responsibilities: Design, implementation, and improvement of processes & automation of data infrastructure. Tuning of data pipelines for reliability & performance. Building tools and scripts to develop, monitor, and troubleshoot ETLs. Perform scalability, latency, and availability tests on a regular basis. Perform code reviews and QA data imported by various processes. Investigate, analyze, correct and document reported data defects. Create and maintain technical specification documentation.

Eucloid offers a high growth path along with great compensation, which is among the best in the industry. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Greater Bengaluru Area

On-site

Job Title: Senior Data Analyst Location: Bangalore Experience: 3-4 Years Department: Analytics / Business Intelligence Employment Type: Full-time Job Summary: We are seeking a highly skilled and detail-oriented Senior Data Analyst to join our data-driven team. The ideal candidate will have strong expertise in SQL, Tableau, and MS Excel , with a foundational understanding of Python for data analysis and automation . You will play a key role in turning data into actionable insights that influence strategic decisions across the business. Key Responsibilities: Design, develop, and maintain SQL queries to extract and analyze large datasets from multiple sources. Build interactive Tableau dashboards and reports to visualize business trends and performance metrics. Perform advanced data analysis in MS Excel including pivot tables, lookups, and complex formulas. Use Python for data cleaning, automation, and basic exploratory data analysis. Collaborate with cross-functional teams to understand business requirements and translate them into data solutions. Conduct root cause analysis and identify key insights to support business decisions. Ensure data accuracy, consistency, and integrity across all reporting and analytics deliverables. Exposure to AI-driven analytics or interest in learning AI-based tools will be an added advantage. Required Qualifications: Bachelor’s degree in Computer Science, Statistics, Mathematics, Economics, or a related field. Proficiency in SQL for data extraction and transformation. Strong expertise in Tableau for building reports and dashboards. Advanced skills in Microsoft Excel , including macros, charts, and data modeling. Working knowledge of Python for scripting and data manipulation (Pandas, NumPy preferred). Previous experience in E-commerce Industry (Mandatory) Strong problem-solving abilities and attention to detail. Excellent communication and data storytelling skills. Preferred Qualifications: Experience with data warehousing tools like Snowflake, Redshift, or BigQuery. Exposure to cloud platforms (AWS, Azure, GCP). Familiarity with ETL tools and processes. Background in A/B testing or statistical modeling is a plus. Join us if you're passionate about turning data into insights and want to drive real business impact!

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Role Description This is a Full-Time position with one of our clients for a Redshift Database Administrator. The Redshift Database Administrator will be responsible for managing, designing, and troubleshooting Redshift databases. Daily tasks include performing database administration, ensuring database design is optimal, troubleshooting issues, managing database replication, and ensuring database performance and integrity. Qualifications Proficiency in Database Administration and Database Design Strong Troubleshooting skills Experience with Databases and Replication Strong understanding of database design, performance tuning, and optimization techniques Proficiency in SQL and experience with database scripting languages (e.g., Python, Shell) Experience with database backup and recovery, security, and high availability solutions Familiarity with AWS services and tools, including S3, EC2, IAM, and CloudWatch Excellent problem-solving abilities and analytical skills Ability to work independently and remotely Advanced knowledge of AWS Redshift is a plus Bachelor's degree in computer science, Information Technology, or related field
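Purely as an example of the day-to-day work described (not part of the posting), a small Python sketch of a routine Redshift health check a DBA might script: querying the SVV_TABLE_INFO system view for tables with a high unsorted percentage or stale statistics. The connection details and thresholds are placeholders.

```python
# Hypothetical Redshift maintenance check; connection details and thresholds are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="dba_user",
    password="REDACTED",
)

# Flag tables that likely need VACUUM (high unsorted %) or ANALYZE (stale stats).
QUERY = """
    SELECT "schema", "table", tbl_rows, unsorted, stats_off
    FROM svv_table_info
    WHERE unsorted > 20 OR stats_off > 10
    ORDER BY unsorted DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for schema, table, rows, unsorted, stats_off in cur.fetchall():
        print(f"{schema}.{table}: rows={rows}, unsorted%={unsorted}, stats_off%={stats_off}")

conn.close()
```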

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Who we are? Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo. The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, every day.

What are we looking for? Are you a keen learner? Excellent mentor? Passionate coach? We’re looking for someone who’s all three! We’re on the lookout for someone who can design and implement our data processing pipelines for all kinds of data sources.

What you'll do as a Manager - Data Engineering with us?
1. You have worked in environments of different shapes and sizes. On-premise, private cloud, public cloud, hybrid, all Windows / Linux / a healthy mix. Thanks to this experience, you can connect the dots quickly and understand client pain points.
2. You are curious. You keep up with the breakneck speed of innovation on public cloud. When something new gets released or an existing service changes - you try it out and you learn.
3. You have a strong database background - relational and non-relational alike. a. MySQL, PostgreSQL, SQL Server, Oracle. b. MongoDB, Cassandra and other NoSQL databases. c. Strong SQL query writing experience. d. HA, DR, performance tuning, migrations. e. Experience with the cloud offerings - RDS, Aurora, CloudSQL, Azure SQL.
4. You have hands-on experience with designing, deploying, migrating enterprise data warehouses and data lakes. a. Familiarity with migrations from the likes of Netezza, Greenplum, Oracle to BigQuery/Redshift/Azure Data Warehouse. b. Dimensional data modelling, reporting & analytics. c. Designing ETL pipelines.
5. You have experience with advanced analytics - the ability to work with the Applied AI team and assist in delivering predictive analytics, ML models etc.
6. You have experience with the Big Data ecosystem. a. Self-managed Hadoop clusters, distributions like Hortonworks and the cloud equivalents like EMR, Dataproc, HDInsight. b. Apache Hudi, Hive, Presto, Spark, Flink, Kafka etc.
7. You have hands-on experience with tools: Apache Airflow, Talend, Tableau, Pandas, DataFlow, Kinesis, Stream Analytics etc.

What are the must-haves to join us?
1. Is education overrated? Yes. We believe so. However, there is no way to locate you otherwise. So we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been coding since your 6th grade. And the latter is better. We will find you faster if you specify the latter in some manner. :)
2. 8-10+ years of overall IT experience with a strong data engineering and business intelligence background.
3. Minimum 3 years of experience on projects with GCP / AWS / Azure.
4. Minimum 3+ years of experience in data & analytics delivery and management consulting working with Data Migration, ETL, Business Intelligence, Data Quality, Data Analytics and AI tools.
5. 4+ years of hands-on experience with Python & SQL.
6. Experience across data solutions including data lake, warehousing, ETL, streaming, reporting and analytics tools.
7. Prior experience in recruitment, training & grooming of geeks.
8. Great to have certifications: a. GCP and/or AWS, professional level. b. Your contributions to the community - tech blogs, Stack Overflow etc.
9.
Strong communication skills to communicate across a diverse audience with varying levels of business and technical expertise. So, If you are passionate about tech, future & what you read above (we really are!), apply here to experience the ‘Art of Possible’

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Job Title: AWS Data Engineer 📍 Location: Remote (India) 🕒 Experience: 3+ Years 💼 Employment Type: Full-Time About the Role: We’re looking for a skilled AWS Data Engineer with 3+ years of hands-on experience in building and managing robust, scalable data pipelines using AWS services. The ideal candidate will have a strong foundation in processing both structured and unstructured data, particularly from IoT/sensor sources. Experience in the energy sector and with time-series data is highly desirable. Key Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena Integrate and process structured, unstructured, and real-time sensor/IoT data Ensure pipeline performance, reliability, and fault tolerance Collaborate with data scientists, analysts, and engineering teams to build analytics-ready solutions Transform data using Python, Pandas , and SQL Enforce data integrity, quality, and security standards Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation Monitor workflows, troubleshoot pipeline issues, and implement solutions Explore and contribute to the use of modern AWS tools like Bedrock, Textract, Rekognition , and GenAI applications Required Skills & Qualifications: Bachelor’s/Master’s in Computer Science, IT, or related field Minimum 3 years of experience in AWS data engineering Proficient in: AWS Glue, Redshift, S3, Lambda, EMR, Athena Python, Pandas, SQL RDS, Postgres, SAP HANA Strong knowledge of data modeling, warehousing, and pipeline orchestration Experience with Git and Infrastructure as Code using Terraform Preferred Skills: Experience with energy sector data or sensor-based/IoT data Exposure to ML tools like SageMaker, TensorFlow, Scikit-learn Familiarity with Apache Spark, Kafka Experience with data visualization tools: Tableau, Power BI, AWS QuickSight Awareness of data governance tools like AWS Data Quality, Collibra, Databrew AWS certifications (e.g., Data Analytics Specialty, Solutions Architect Associate)

Posted 1 week ago

Apply

2.0 years

0 Lacs

Kochi, Kerala, India

On-site

Location: Kochi, Coimbatore, Trivandrum. Must have skills: Python/Scala, PySpark/PyTorch. Good to have skills: Redshift.

Job Summary: You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc. to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions to ensure data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional And Technical Skills: Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data / Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information: Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics – Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core – Data Engineer; Databricks Data Engineering.

About Our Company | Accenture. Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Software Engineer – Integration (Linux)

Skills: To be successful in this role as a Linux-focused Integration “Software Engineer – OSS Platform Engineering”, you should possess the following skillsets: Strong Linux proficiency and expertise with containerization and Kubernetes, with programming expertise in one of the high-level languages like Python, Java, or Golang, and NetDevOps automation. Hands-on expertise with IaC, cloud platforms, CI/CD pipelines for data, containerization & orchestration, and SRE principles. Strong knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways, etc.) and data engineering tools/frameworks like Apache Spark, Airflow, Flink and the Hadoop ecosystem.

Some other highly valued skills include: Expertise building ELT pipelines and cloud/storage integrations - data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.). Solid understanding of DevOps tooling, GitOps, CI/CD, config management, Jenkins, build pipelines and source control systems. Working knowledge of cloud infrastructure services: compute, storage, networking, hybrid connectivity, monitoring/logging, security and IAM. SRE experience: expertise building and defining KPIs (SLIs/SLOs) using open-source tooling like ELK, Prometheus and various other instrumentation, telemetry, and log analytics tools.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.
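As a hedged illustration of the SLI/SLO tooling mentioned above (not part of the posting): a minimal Python sketch using the open-source prometheus_client library to expose a latency histogram and an error counter that SLO dashboards and alerts could be built on. The metric names, port, and simulated workload are arbitrary choices.

```python
# Hypothetical SLI instrumentation; metric names, port, and workload are invented.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUEST_LATENCY = Histogram("pipeline_request_seconds", "Request latency in seconds")
REQUEST_ERRORS = Counter("pipeline_request_errors_total", "Total failed requests")


def handle_request():
    # Stand-in for real work (e.g., an ingestion step or API call).
    with REQUEST_LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))
        if random.random() < 0.05:
            REQUEST_ERRORS.inc()


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes metrics from :8000/metrics
    while True:              # Runs until interrupted; a real service would embed this in its request path.
        handle_request()
```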

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

Remote

Position: Senior Database Administrator Job Type: Remote - Full Time Experience: 7+ years About the role: We are looking for a highly skilled Senior Database Administrator (DBA) with expertise across both cloud and on-premise environments to join our Production DBA Team. The ideal candidate will ensure high availability, performance, and security of critical database systems while driving root cause analysis, automation, and proactive monitoring. You will take end-to-end ownership of issues, maintain clear communication with stakeholders, and collaborate with cross-functional teams to drive timely resolution—all while adhering to change management and production governance protocols. Key Responsibilities: Administer, maintain, and optimize databases across on-premise and cloud platforms including AWS (RDS MySQL/Aurora/Postgres, Redshift) and Oracle Cloud Infrastructure (OCI - DBCS, ADW) . Manage and tune MySQL, PostgreSQL, Oracle (CDB/PDB), Redshift , and Hadoop environments. Perform advanced performance tuning, capacity planning, and health checks using tools such as SQL Developer, OCI Metrics, Performance Hub, and SolarWinds DPA . Implement monitoring and alerting systems (CloudWatch, Opsgenie, OCI Alarms), and proactively resolve CPU, memory, I/O, and storage issues. Handle database backup, recovery, replication, and housekeeping tasks , ensuring minimal downtime and data integrity. Troubleshoot issues related to tablespaces, indexes, mounting failures, blocking/deadlocks, and data replication . Work with command line tools (OCI CLI) and develop automation scripts in Shell, Python, or Perl . Administer wallets and password-less authentication , and manage Oracle ADW services. Collaborate with vendors (Oracle, AWS, SolarWinds) to resolve escalated issues efficiently. Maintain detailed documentation and communicate effectively across technical and non-technical teams. Requirements: 6–10 years of hands-on DBA experience with increasing responsibility in enterprise environments. Strong experience with MySQL, PostgreSQL, Oracle , and cloud-based databases (AWS RDS, Redshift, OCI DBCS/ADW) . Solid scripting skills in Python, Shell, or Perl for automation and operational efficiency. Experience with database performance tuning, capacity planning, and backup strategies . Working knowledge of Hadoop ecosystems is a strong plus. Familiarity with wallet management, password-less auth, and Oracle multi-tenant architecture (CDB/PDB) . Excellent problem-solving, interpersonal, and communication skills. Ability to work within SLAs and maintain high levels of ownership and accountability.

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Bengaluru, Karnataka, India Department Data Engineering Job posted on Jul 09, 2025 Employment type Full Time About Us MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences. Are You The One? As a Technical Lead Engineer - Data , you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS . You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability , while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark . Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization , enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control , and compliance (GDPR, MAS TRM) . Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities Architect scalable, cost-optimized pipelines across real-time and batch paradigms , using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS , with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack : Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems , including schema design, format optimization, and SLAs. 
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls , encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain , with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts , data mesh patterns , and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases . Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. MatchMove Culture: We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger! Personal Data Protection Act: By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporation has service arrangements, in each case for all purposes in connection with your job application, and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Crunchyroll Founded by fans, Crunchyroll delivers the art and culture of anime to a passionate community. We super-serve over 100 million anime and manga fans across 200+ countries and territories, and help them connect with the stories and characters they crave. Whether that experience is online or in-person, streaming video, theatrical, games, merchandise, events and more, it’s powered by the anime content we all love. Join our team, and help us shape the future of anime! Who We Are We're a cast of characters working to shine a spotlight on anime. Crunchyroll is an international business focused on creating both online and offline experiences for fans through content (licensed, co-produced, originals, distribution), merchandise, events, gaming, news, and more. Visit our About Us pages for more information about our collection of brands. Location: Hyderabad, India The intersection of media and technology is our sweet spot and we are fortunate to be global office in Hyderabad, India. This office houses many of our corporate functions and cross-functional teams tasked with creating exceptional experiences for our passionate communities. About the Team: The Center for Data and Insights (CDI) is the centralized team of data engineering, BI, analytics, and data science experts, passionate about servicing the organization with certified reports and insights! The mission of the group is to inspire, support, and guide our partners to be data-aware and to build the systems of intelligence to discover insights and act on them. About the Role: We are looking for a Director, CDI Operations who will manage partner relationships, ensure project success, and drive satisfaction. You will report to the Senior Director, Data Analytics. You will be a key point of contact for our growing global organization in our efforts towards Growth and Strategy. Your work also involves identifying opportunities for growth within existing stakeholder relationships. Responsibilities: Stakeholder Relationship Management: Build and maintain engaging, trusting relationships between global team members. Project Management: Oversee project execution, ensuring timelines are met, budgets are observed, and project scope is well-defined. Communication: Act as the primary point of contact, communicating project progress and updates to stakeholders and internal teams. Problem Solving: Identify and resolve challenges that may arise during the project lifecycle, ensuring partner satisfaction. Opportunity Identification: Identify and pursue new business opportunities with existing partners, potentially leading to expanded engagements. Team Leadership: Guide and develop team members involved in stakeholder projects. Partner Onboarding and Education: Ensuring new partners are onboarded and understand the value of the service/product. Global Business Impact: Lead and influence business principles and how they apply to stakeholder engagements. Global team management: Being available during the US/India timezones to collaborate with stakeholders. About You: 12+ years in Partner Relationship Management. 12+ years Project Management: Overseeing project execution, ensuring timelines are met, budgets are followed, and project scope is well-defined. 10+ years within a technical role. This includes Data Analytics, Data Engineering, Data Science, etc. Knowledge on Cloud data warehouses like Redshift, Snowflake, Imply etc. Knowledge about visualization tools like Tableau. Knowledge on large data sets (Terabytes of data/ billions of records). 
5+ years of consulting working experience in international environments, with the stature necessary to work as a partner with senior colleagues and clients. 5+ years of Onsite/Offshore Management experience. Experience breaking down and solving problems through quantitative analysis. Knowledge about the Entertainment domain or equivalent B2C industry. Bachelor's degree in Business, Management, Data Science, or a related field A Day in the Life: On a daily basis, partner with the CDI Stakeholders in a structured manner, both verbally and in writing, including colleagues with different perspectives and seniority levels. Collaborate across time zones using relevant digital productivity tools and digital communication tools (e.g., email, Slack, Zoom). Work with offshore and onsite teams, including a 3-4 hour overlap with the US team. Maintain a culture of high-quality output and outstanding customer service. Why you will love working at Crunchyroll In addition to getting to work with fun, passionate and inspired colleagues, you will also enjoy the following benefits and perks: Best-in class medical, dental, and vision private insurance healthcare coverage Access to counseling & mental health sessions 24/7 through our Employee Assistance Program (EAP) Free premium access to Crunchyroll Professional Development Company's Paid Parental Leave up to 26 weeks for birthing parents up to 12 weeks for non-birthing parents Hybrid Work Schedule Paid Time Off Flex Time Off 5 Yasumi Days Half-Day Fridays during the summer Winter Break About Our Values We want to be everything for someone rather than something for everyone and we do this by living and modeling our values in all that we do. We value Courage. We believe that when we overcome fear, we enable our best selves. Curiosity. We are curious, which is the gateway to empathy, inclusion, and understanding. Kaizen. We have a growth mindset committed to constant forward progress. Service. We serve our community with humility, enabling joy and belonging for others. Our commitment to diversity and inclusion Our mission of helping people belong reflects our commitment to diversity & inclusion. It's just the way we do business. We are an equal opportunity employer and value diversity at Crunchyroll. Pursuant to applicable law, we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Crunchyroll, LLC is an independently operated joint venture between US-based Sony Pictures Entertainment, and Japan's Aniplex, a subsidiary of Sony Music Entertainment (Japan) Inc., both subsidiaries of Tokyo-based Sony Group Corporation. Questions about Crunchyroll’s hiring process? Please check out our Hiring FAQs: https://help.crunchyroll.com/hc/en-us/articles/360040471712-Crunchyroll-Hiring-FAQs Please refer to our Candidate Privacy Policy for more information about how we process your personal information, and your data protection rights: https://tbcdn.talentbrew.com/company/22978/v1_0/docs/spe-jobs-privacy-policy-update-for-crpa-dec-21-22.pdf Please beware of recent scams to online job seekers. Those applying to our job openings will only be contacted directly from @crunchyroll.com email account.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Role Overview
We are seeking a Lead Data Engineer who will take ownership of our data infrastructure, manage a small data team, and oversee the design and implementation of reporting systems. This role is perfect for someone with strong technical skills in data engineering who also has experience leading projects and delivering business-critical dashboards and reports.
Responsibilities:
Team Leadership: Lead and mentor a team of data engineers and BI developers. Assign tasks, review code, and ensure timely delivery of work.
Pipeline Management: Design, build, and maintain scalable ETL/ELT pipelines across various data sources.
Reporting & BI Oversight: Oversee the development and delivery of operational and executive reports. Ensure data accuracy and alignment with business goals.
Data Warehousing: Architect and optimize data warehouses (e.g., Snowflake, Redshift, BigQuery) to support analytical workloads and real-time needs.
Collaboration: Work closely with business and analytics teams to understand data needs and translate them into technical requirements.
Governance & Quality: Implement standards for data governance, documentation, and quality.
Tooling: Evaluate and integrate new tools for data transformation, visualization (e.g., Tableau, Power BI, Looker), and reporting.
Requirements:
6+ years of experience in data engineering, with at least 2 years in a lead role.
Strong experience in SQL, Python, and ETL tools (e.g., Airflow, DBT).
Experience with BI/reporting tools like Power BI, Tableau, or Looker.
Deep understanding of data modeling and warehouse architecture.
Familiarity with cloud platforms (AWS, GCP, Azure).
Excellent communication and stakeholder management skills.
Experience managing or mentoring junior team members.
(ref:hirist.tech)

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Key Responsibilities
Lead and mentor a high-performing data pod composed of data engineers, data analysts, and BI developers.
Design, implement, and maintain ETL pipelines and data workflows to support real-time and batch processing.
Architect and optimize data warehouses for scale, performance, and security.
Perform advanced data analysis and modeling to extract insights and support business decisions.
Lead data science initiatives including predictive modeling, NLP, and statistical analysis.
Manage and tune relational and non-relational databases (SQL, NoSQL) for availability and performance.
Develop Power BI dashboards and reports for stakeholders across departments.
Ensure data quality, integrity, and compliance with data governance and security standards.
Work with cross-functional teams (product, marketing, ops) to turn data into strategy.
Qualifications:
PhD in Data Science, Computer Science, Engineering, Mathematics, or a related field.
7+ years of hands-on experience across data engineering, data science, analysis, and database administration.
Strong experience with ETL tools (e.g., Airflow, Talend, SSIS) and data warehouses (e.g., Snowflake, Redshift, BigQuery).
Proficient in SQL, Python, and Power BI.
Familiarity with modern cloud data platforms (AWS/GCP/Azure).
Strong understanding of data modeling, data governance, and MLOps practices.
Exceptional ability to translate business needs into scalable data solutions.
(ref:hirist.tech)

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We're committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.
Benefits
Pay and Benefits:
Competitive compensation, including base pay and annual incentive.
Comprehensive health and life insurance and well-being benefits, based on location.
Pension / Retirement benefits.
Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee).
The Impact You Will Have In This Role
The Development family is responsible for creating, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth subject matter expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions.
The Software Engineering role specializes in planning, documenting technical requirements, crafting, developing, and testing all software systems and applications for the firm. It works closely with architects, product managers, project management, and end-users in the development and improvement of existing software systems and applications, proposing and recommending solutions that solve complex business problems.
Your Primary Responsibilities
Participate in daily code deploys while working on individual or team projects.
Translate business requirements into software designs and implementations.
Participate in thorough code reviews with the goal of illustrating quality engineering practices and producing the highest quality code possible.
Build high-quality and scalable/performant applications.
Understand requirements and translate them into specific application- and infrastructure-related tasks.
Design frameworks that promote concepts of isolation, extensibility, and reusability.
Support the team in handling client expectations and resolving issues urgently.
Support development teams, testing, troubleshooting, and production support.
Create applications and construct unit test cases that ensure compliance with functional and non-functional requirements.
Work with peers to mature ways of working, continuous integration, and continuous delivery.
Align risk and control processes with day-to-day responsibilities to supervise and mitigate risk; escalate issues appropriately.
Qualifications
Minimum of 4 years of related experience.
Bachelor's degree preferred or equivalent experience.
Talents Needed For Success
4+ years’ experience in Application Development and system analysis.
Expert in Java/JEE and coding standard methodologies.
Expert knowledge in development concepts.
Good design and coding skills in Web Services, Spring/Spring Boot, SOAP/REST APIs, and JavaScript frameworks for modern web applications.
Solid understanding of HTML, CSS, and modern JavaScript.
Experience with Angular v15+ and/or React.
Experience integrating with database technologies such as Oracle, PostgreSQL, etc.
Ability to write quality and self-validating code using unit tests and following TDD.
Experience with Agile methodology and ability to collaborate with other team members.
Bachelor's degree in a technical field or equivalent experience.
Nice To Have
Experience in developing and using the AWS cloud stack (S3, SQS, Redshift, Lambda, etc.) is a big plus.
Ability to demonstrate DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, CloudBees, Git, etc.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply