
281 ELT Jobs - Page 3

JobPe aggregates results for easy application access, but you apply directly on the job portal.

6.0 - 10.0 years

16 - 25 Lacs

Faridabad

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications
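For illustration, the DBT deployment automation mentioned above usually reduces to invoking the dbt CLI from a CI step (Jenkins, GitHub Actions, etc.). A minimal sketch in Python, assuming a hypothetical dbt project directory and a "ci" target configured in profiles.yml:

```python
# Minimal sketch: run and test dbt models as a CI step before deployment.
# The project directory and the "ci" target are hypothetical placeholders.
import subprocess
import sys

def run_dbt(command: str) -> None:
    """Invoke a dbt CLI command and fail the build on a non-zero exit code."""
    result = subprocess.run(
        ["dbt", command, "--target", "ci"],
        cwd="analytics_project",  # hypothetical dbt project path
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        sys.exit(result.returncode)

if __name__ == "__main__":
    run_dbt("run")   # build the models in Snowflake
    run_dbt("test")  # run schema and data tests against the built models
```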

Posted 1 week ago

Apply

8.0 - 12.0 years

35 - 50 Lacs

Hyderabad

Work from Office

Job Description: Senior Data Analyst
Location: Hyderabad, IN - Work from Office
Experience: 7+ Years

Role Summary
We are seeking an experienced and highly skilled Senior Data Analyst to join our team. The ideal candidate will possess deep proficiency in SQL, a strong understanding of data architecture, and working knowledge of the Google Cloud Platform (GCP)-based ecosystem. They will be responsible for turning complex business questions into actionable insights, driving strategic decisions, and helping shape the future of our Product/Operations team. This role requires a blend of technical expertise, analytical rigor, and excellent communication skills to partner effectively with engineering, product, and business leaders.

Key Responsibilities
- Advanced Data Analysis: Utilize advanced SQL skills to query, analyze, and manipulate large, complex datasets. Develop and maintain robust, scalable dashboards and reports to monitor key performance indicators (KPIs).
- Source Code Management: Proven ability to effectively manage, version, and collaborate on code using codebase management systems like GitHub. Responsible for upholding data integrity, producing reproducible analyses, and fostering a collaborative database management environment through best practices in version control and code documentation.
- Strategic Insights: Partner with product managers and business stakeholders to define and answer critical business questions. Conduct deep-dive analyses to identify trends, opportunities, and root causes of performance changes.
- Data Architecture & Management: Work closely with data engineers to design, maintain, and optimize data schemas and pipelines. Provide guidance on data modeling best practices and ensure data integrity and quality.
- Reporting & Communication: Translate complex data findings into clear, concise, and compelling narratives for both technical and non-technical audiences. Present insights and recommendations to senior leadership to influence strategic decision-making.
- Project Leadership: Lead analytical projects from end to end, including defining project scope, methodology, and deliverables. Mentor junior analysts, fostering a culture of curiosity and data-driven problem-solving.

Required Skills & Experience
- Bachelor's degree in a quantitative field such as Computer Science, Statistics, Mathematics, Economics, or a related discipline.
- 5+ years of professional experience in a data analysis or business intelligence role.
- Expert-level proficiency in SQL, with a proven ability to write complex queries, perform window functions, and optimize queries for performance on massive datasets.
- Strong understanding of data architecture, including data warehousing, data modeling (e.g., star/snowflake schemas), and ETL/ELT principles.
- Excellent communication and interpersonal skills, with a track record of successfully influencing stakeholders.
- Experience with a business intelligence tool such as Tableau, Looker, or Power BI to create dashboards and visualizations.
- Experience with internal Google/Alphabet data tools and infrastructure, such as BigQuery, Dremel, or Google-internal data portals.
- Experience with statistical analysis, A/B testing, and experimental design.
- Familiarity with machine learning concepts and their application in a business context.
- A strong sense of curiosity and a passion for finding and communicating insights from data.
- Proficiency with scripting languages for data analysis (e.g., Apps Script, Python, or R) would be an added advantage.

Responsibilities
- Lead a team of data scientists and analysts to deliver data-driven insights and solutions.
- Oversee the development and implementation of data models and algorithms to support new product development.
- Provide strategic direction for data science projects, ensuring alignment with business goals.
- Collaborate with cross-functional teams to integrate data science solutions into business processes.
- Analyze complex datasets to identify trends and patterns that inform business decisions.
- Utilize generative AI techniques to develop innovative solutions for product development.
- Ensure adherence to ITIL V4 practices in all data science projects.
- Develop and maintain documentation for data science processes and methodologies.
- Mentor and guide team members to enhance their technical and analytical skills.
- Monitor project progress and adjust strategies to meet deadlines and objectives.
- Communicate findings and recommendations to stakeholders in a clear and concise manner.
- Drive continuous improvement in data science practices and methodologies.
- Foster a culture of innovation and collaboration within the data science team.

Qualifications
- Strong experience in business analysis and data analysis.
- Expertise in generative AI and its applications in product development.
- Solid understanding of ITIL V4 practices and their implementation.
- Excellent communication and collaboration skills.
- Proficiency in managing and leading a team of data professionals.
- Commitment to working from the office during day shifts.

Posted 1 week ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

Hyderabad

Remote

Job Title: Data Architect
Location: Hyderabad / Bangalore / Remote
Experience Level: 10-15 years preferred
Industry: IT/Software Services/SaaS
Company Profile/Website: Pragma Edge | Powering Your Connected Enterprise
Bangalore Office Location: 1st floor, IndiQube Platina, 15 Commissariat Road, Ashok Nagar, Bengaluru, Karnataka - 560025
Hyderabad Office Location: Pragma Towers, Plot No. 07, Image Gardens Road, Silicon Valley, Madhapur, Hyderabad, TG - 500081
Employment Type: Full-time

Key Responsibilities
- Design and implement scalable, secure, and high-performance data architecture solutions tailored for logistics operations.
- Define data standards, models, and governance policies across heterogeneous data sources (e.g., EDI, ERP, TMS, WMS).
- Architect and optimize data pipelines to enable real-time analytics and reporting for warehouse management, freight, and inventory systems.
- Collaborate with business stakeholders to translate operational logistics needs into actionable data strategies.
- Ensure system reliability, data security, and compliance with relevant regulations.
- Evaluate and recommend tools and platforms, including cloud-based data services (Azure, AWS, GCP).
- Lead data integration efforts, including legacy systems migration and EDI transformations.

Required Skills & Qualifications
- Proven experience as a Data Architect in logistics, transportation, or supply chain domains.
- Strong understanding of EDI formats, warehouse operations, fleet data, and logistics KPIs.
- Hands-on experience with data modeling, ETL, ELT, and data warehousing.
- Expertise in cloud platforms (Azure preferred), relational and NoSQL databases, and BI tools.
- Knowledge of data governance, security, and data lifecycle management.
- Familiarity with tools like Informatica, Talend, SQL Server, Snowflake, or BigQuery is a plus.
- Excellent analytical thinking and stakeholder communication skills.

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Key Responsibilities
- Design conformed star & snowflake schemas; implement SCD2 dimensions and fact tables.
- Lead Spark (PySpark/Scala) or AWS Glue ELT pipelines from RDS Zero-ETL/S3 into Redshift.
- Tune RA3 clusters (sort/dist keys, WLM queues, Spectrum partitions) for sub-second BI queries.
- Establish data-quality, lineage, and cost-governance dashboards using CloudWatch & Terraform/CDK.
- Collaborate with Product & Analytics to translate HR KPIs into self-service data marts.
- Mentor junior engineers; drive documentation and coding standards.

Must-Have Skills
- Amazon Redshift (sort & dist keys, RA3, Spectrum)
- Spark on EMR/Glue (PySpark or Scala)
- Dimensional modelling (Kimball), star schema, SCD2
- Advanced SQL + Python/Scala scripting
- AWS IAM, KMS, CloudWatch, Terraform/CDK, CI/CD (GitHub Actions or CodePipeline)

Nice-to-Have
- dbt, Airflow, Kinesis/Kafka, Lake Formation row-level ACLs
- GDPR / SOC 2 compliance exposure
- AWS Data Analytics or Solutions Architect certification

Education
- B.E./B.Tech in Computer Science, IT, or related field (Master’s preferred but not mandatory).

Compensation & Benefits
- Competitive CTC 25–40 LPA
- Health insurance for self & dependents

Why Join Us?
- Own a greenfield HR analytics platform with executive sponsorship.
- Modern AWS stack (Redshift RA3, Lake Formation, EMR on EKS).
- Culture of autonomy, fast decision-making, and continuous learning.

Application Process
- 30-min technical screen
- 4-hour take-home Spark/SQL challenge
- 90-min architecture deep dive
- Panel interview (leadership & stakeholder communication)
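As a rough illustration of the S3-to-Redshift load path described above, the bulk-load step is typically a COPY from partitioned Parquet on S3 into a staging table. A hedged sketch using psycopg2; the cluster endpoint, table, bucket, and IAM role ARN are hypothetical placeholders:

```python
# Minimal sketch: bulk-load Parquet files from S3 into a Redshift staging table.
# Connection details, table name, S3 prefix, and IAM role are hypothetical placeholders.
import psycopg2

COPY_SQL = """
    COPY staging.fact_employee_events
    FROM 's3://hr-analytics-lake/fact_employee_events/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

def load_staging() -> None:
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="***",
    )
    try:
        # The connection context manager commits the transaction on success.
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # Redshift ingests all Parquet files under the prefix
    finally:
        conn.close()

if __name__ == "__main__":
    load_staging()
```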

Posted 1 week ago

Apply

10.0 - 12.0 years

25 - 35 Lacs

Chennai, Bengaluru

Work from Office

Role: Data Architect (AWS, Snowflake)

What awaits you / Job Profile
- Design and develop scalable data pipelines using AWS services.
- Integrate diverse data sources and ensure data consistency and reliability.
- Collaborate with data scientists and other stakeholders to understand data requirements.
- Implement data security measures and maintain data integrity.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Optimize and maintain data warehouse and data lake architectures.
- Create and maintain comprehensive documentation for data engineering processes.

What should you bring along
- Proven experience with Snowflake and SQL.
- Expert-level proficiency in building and managing data pipelines with Python.
- Strong experience in AWS cloud services, including Lambda, S3, Glue, and other data-focused services.
- Exposure to Terraform for provisioning and managing infrastructure-as-code on AWS.
- Proficiency in SQL for querying and modeling large-scale datasets.
- Hands-on experience with Git for version control and managing collaborative workflows.
- Familiarity with ETL/ELT processes and tools for data transformation.
- Strong understanding of data architecture, data modeling, and data lifecycle management.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills to work effectively in a global, distributed team environment.

Must-have technical skills
- Good understanding of architecting data and solutions.
- Data quality, governance, data security, and data modelling concepts.
- Data modelling, mapping, and compliance to BaDM.
- Solutioning on cloud (AWS) with cloud tools, with a good understanding of Snowflake.
- Defining CI/CD configuration for GitHub and AWS Terraform deployment configuration.
- Ensuring architecture evolution with the latest technology.
- Guiding and mentoring the team, reviewing code, and ensuring development to standards.

Good to have technical skills
- Strong understanding of ETL concepts, design patterns, and industry best practices.
- Experience with ETL testing.
- Snowflake certification.
- AWS certification.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You should have strong knowledge of SQL and Python. Experience in Snowflake is preferred. Additionally, you should have knowledge of AWS services such as S3, Lambda, IAM, Step Functions, SNS, SQS, ECS, and DynamoDB. Expertise in data movement technologies such as ETL/ELT is important. Good-to-have skills include knowledge of DevOps, Continuous Integration, and Continuous Delivery with tools such as Maven, Jenkins, Stash, Control-M, and Docker. Experience with automation and REST APIs would be beneficial for this role.
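As a small illustration of the AWS services listed (S3, Lambda, DynamoDB), a common data-movement pattern is an S3-triggered Lambda that registers newly arrived files for downstream ETL. A hedged sketch; the DynamoDB table name is a hypothetical placeholder:

```python
# Minimal sketch: S3-triggered Lambda that logs incoming file metadata to DynamoDB.
# The DynamoDB table name is a hypothetical placeholder.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("etl_file_registry")  # hypothetical table

def handler(event, context):
    """Record each newly arrived S3 object so downstream ETL jobs can pick it up."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        table.put_item(
            Item={
                "object_key": key,
                "bucket": bucket,
                "size_bytes": size,
                "status": "PENDING",
            }
        )
    return {"processed": len(records)}
```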

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineering Manager at Micron Technology Inc., you will play a crucial role within the Technology Solutions group of the Smart Manufacturing and AI organization. Your responsibilities will involve working closely with Micron's Front End Manufacturing and Planning Ops business area, focusing on data engineering, Machine Learning, and advanced analytics solutions. We are seeking a leader with a strong technical background in Big Data and cloud data warehouse technologies, particularly in cloud data warehouse platforms like Snowflake and GCP, monitoring solutions such as Splunk, and automation and machine learning using Python.

Your primary tasks will include leading a team of Data Engineers, providing technical and people leadership, and ensuring the successful delivery of critical projects and production support. You will engage team members in their career development, maintain a positive work culture, and participate in the design, architecture review, and deployment of big data and cloud data warehouse solutions. Additionally, you will collaborate with key project stakeholders, analyze project needs, and translate requirements into technical specifications for the team of data engineers.

To excel in this role, you should have a solid background in developing, delivering, and supporting big data engineering and advanced analytics solutions, with at least 10 years of experience in the field. Managing or leading data engineering teams for 6+ years and hands-on experience in building cloud data-centric solutions in GCP or other cloud platforms for 4-5 years is essential. Proficiency in Python programming, experience with Spark, ELT or ETL techniques, database management systems like SQL Server and Snowflake, and strong domain knowledge in Manufacturing Planning and Scheduling data are highly desired. Furthermore, you should possess intermediate to advanced programming skills, excellent communication abilities, and a passion for data and information. Being self-motivated, adaptable to a fast-paced environment, and having a Bachelor's degree in Computer Science, Management Information Systems, or related fields are prerequisites for this role.

Micron Technology is a pioneering industry leader in memory and storage solutions, dedicated to transforming how information enriches lives globally. Our commitment to innovation, technology leadership, and operational excellence drives us to deliver high-performance memory and storage products through our Micron and Crucial brands, fueling advancements in artificial intelligence and 5G applications. If you are motivated by the power of data and eager to contribute to cutting-edge solutions, we encourage you to explore career opportunities with us at micron.com/careers.

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Gurugram

Work from Office

Job Summary: We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making.

Key Responsibilities:
- Design, develop, and implement scalable Snowflake-based data architectures.
- Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts.
- Optimize Snowflake performance through clustering, partitioning, and caching strategies.
- Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions.
- Ensure data quality, governance, integrity, and security across all platforms.
- Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake.
- Automate data workflows and support CI/CD deployment practices.
- Implement data modeling techniques including dimensional modeling, star/snowflake schema, and normalization/denormalization.
- Support and promote metadata management and data governance best practices.

Technical Skills (Hard Skills):
- Expertise in Snowflake: architecture design, performance tuning, cost optimization.
- Strong proficiency in SQL, Python, and scripting for data engineering tasks.
- Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar.
- Proficient in data modeling (dimensional, relational, star/snowflake schema).
- Good knowledge of cloud platforms: AWS, Azure, or GCP.
- Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks.
- Experience with CI/CD tools and version control systems (e.g., Git).
- Knowledge of BI tools such as Tableau, Power BI, or Looker.

Certifications (Preferred/Required):
- Snowflake SnowPro Core Certification – Required or Highly Preferred
- SnowPro Advanced Architect Certification – Preferred
- Cloud certifications (e.g., AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate) – Preferred
- ETL tool certifications (e.g., Talend, Matillion) – Optional but a plus

Soft Skills:
- Strong analytical and problem-solving capabilities.
- Excellent communication and collaboration skills.
- Ability to translate technical concepts into business-friendly language.
- Proactive, detail-oriented, and highly organized.
- Capable of multitasking in a fast-paced, dynamic environment.
- Passionate about continuous learning and adopting new technologies.

Why Join Us?
- Work on cutting-edge data platforms and cloud technologies.
- Collaborate with industry leaders in analytics and digital transformation.
- Be part of a data-first organization focused on innovation and impact.
- Enjoy a flexible, inclusive, and collaborative work culture.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an Informatica Developer with a specialization in Informatica Intelligent Data Management Cloud (IDMC), you will be responsible for designing, developing, and maintaining data pipelines and integrations using Informatica IDMC. Your role will involve working on Cloud Data Integration (CDI) and Cloud Application Integration (CAI) modules, building and optimizing ETL/ELT mappings, workflows, and data quality rules in a cloud setup, as well as deploying and monitoring data jobs using IDMC's operational dashboards and alerting tools.

You will collaborate closely with data architects and business analysts to understand data integration requirements, and write and optimize SQL queries for data processing. Strong hands-on experience with Informatica IDMC; proficiency in CDI, CAI, and cloud-based data workflows; and a solid understanding of ETL/ELT processes, data quality, and data integration best practices are essential for this role. Expertise in SQL and working with Oracle/SQL Server, strong analytical and problem-solving skills, and excellent communication and interpersonal abilities will be key to your success in this position.

Your responsibilities will also include troubleshooting and resolving integration issues efficiently, and ensuring performance tuning and high availability of data solutions. This is a full-time position that requires you to work in person at locations in Bangalore, Cochin, or Trivandrum. If you are passionate about Informatica IDMC, Cloud Data Integration, and Cloud Application Integration, and possess the technical skills required for this role, we encourage you to apply and be part of our dynamic team.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Pune

Work from Office

Sr. Data Engineer 2

We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As a part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong Data Engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products and APIs, and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience in building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce & Netsuite systems
- Experience in SaaS environments
- Designed and deployed ML models
- Experience with events and streaming data
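The posting references orchestrating ELT with tools like Airflow and dbt. A minimal Airflow DAG sketch for context, assuming Airflow 2.x; the DAG id, schedule, and task callables are hypothetical placeholders:

```python
# Minimal sketch of an Airflow 2.x DAG: extract from a third-party API, then load
# into the warehouse. Task logic, DAG id, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_api(**context):
    """Pull the latest batch from a (hypothetical) third-party API."""
    ...

def load_to_warehouse(**context):
    """Load the extracted batch into a Snowflake/Redshift staging table."""
    ...

with DAG(
    dag_id="daily_gtm_metrics",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_api", python_callable=extract_from_api)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extraction succeeds
```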

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 18 Lacs

Bengaluru

Work from Office

Must haves:
- ETL/ELT testing (5/5)
- Experience with SQL, analytical queries (5/5)

Responsibility: Responsible for designing and executing data validation scripts for a data migration project. The scope includes verifying the migration of data from Oracle and Informix (DB2) sources to GCP Cloud Storage, with further processing through Cloud Spanner.

Role and Skills:
- Good communication and analytical skills
- 4 to 7 years of relevant experience (in ETL/ELT testing)
- Ability to analyse mingled data models and data processes, and to maintain referentially intact data flows and quality controls
- Experienced in working with one or more RDBMS/columnar databases, preferably with exposure to semi-structured and unstructured datasets
- Prior experience in one or more large-scale data migration project(s)
- Sound experience in writing complex SQL and analytical queries, and able to validate results
- Working knowledge of Google Cloud based data services: BigQuery, Cloud Storage, etc.
- Hands-on experience using the Python programming language to build data validation scripts
- Good understanding of QA Test Management, Defect Management, and testing methodologies
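For illustration, a basic validation script of the kind described often reconciles row counts (or aggregates) between source and target. A hedged Python sketch using the BigQuery client for the GCP side; the table names and the source-side query are hypothetical placeholders:

```python
# Minimal sketch: reconcile row counts between a source RDBMS table and its
# migrated copy on GCP (queried via BigQuery). Names and connections are hypothetical.
from google.cloud import bigquery

def bigquery_count(table: str) -> int:
    """Count rows in the migrated table on the GCP side."""
    client = bigquery.Client()
    rows = list(client.query(f"SELECT COUNT(*) AS cnt FROM `{table}`").result())
    return rows[0]["cnt"]

def source_count(table: str) -> int:
    """Placeholder for a COUNT(*) query against the Oracle/Informix source."""
    raise NotImplementedError  # e.g., execute SELECT COUNT(*) via the source DB driver

def validate(source_table: str, target_table: str) -> None:
    src, tgt = source_count(source_table), bigquery_count(target_table)
    status = "PASS" if src == tgt else "FAIL"
    print(f"{source_table} -> {target_table}: source={src}, target={tgt}, {status}")

if __name__ == "__main__":
    validate("LOANS.ACCOUNTS", "migration_ds.accounts")  # hypothetical tables
```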

Posted 1 week ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Must-have technical skills:
- 4+ years on Snowflake - advanced SQL expertise
- 4+ years of data warehouse experience - hands-on knowledge of the methods to identify, collect, manipulate, transform, normalize, clean, and validate data; star schema, normalization/denormalization, dimensions, aggregations, etc.
- 4+ years working in reporting and analytics environments - development, data profiling, metric development, CI/CD, production deployment, troubleshooting, query tuning, etc.
- 3+ years on Python - advanced Python expertise
- 3+ years on any cloud platform (AWS preferred) - hands-on experience on AWS with Lambda, S3, SNS/SQS, EC2 is the bare minimum
- 3+ years on any ETL/ELT tool - Informatica, Pentaho, Fivetran, DBT, etc.
- 3+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.)

Must-have soft skills:
- Clear communication - written and verbal, especially around time off, delays in delivery, etc.
- Team player - works in the team and works with the team
- Enterprise experience - understands and follows enterprise guidelines for CI/CD, security, change management, RCA, on-call rotation, etc.

Nice to have:
- Technical certifications from AWS, Microsoft, Azure, GCP, or any other recognized software vendor
- 4+ years on any ETL/ELT tool - Informatica, Pentaho, Fivetran, DBT, etc.
- 4+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.)
- 4+ years of team lead experience
- 3+ years in a large-scale support organization supporting thousands of users

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.

Responsibilities
- Develop and maintain data pipelines implementing ETL processes.
- Take responsibility for Hadoop development and implementation.
- Work closely with a data science team implementing data analytic pipelines.
- Help define data governance policies and support data versioning processes.
- Maintain security and data privacy, working closely with the Data Protection Officer internally.
- Analyse a vast number of data stores and uncover insights.

Skillset Required
- Ability to design, build, and unit test applications in PySpark.
- Experience with Python development and Python data transformations.
- Experience with SQL scripting on one or more platforms - Hive, Oracle, PostgreSQL, MySQL, etc.
- In-depth knowledge of Hadoop, Spark, and similar frameworks.
- Strong knowledge of Data Management principles.
- Experience with normalizing/de-normalizing data structures and developing tabular, dimensional, and other data models.
- Knowledge of YARN, clusters, executors, and cluster configuration.
- Hands-on experience with different file formats like JSON, Parquet, CSV, etc.
- Experience with the CLI on Linux-based platforms.
- Experience analysing current ETL/ELT processes and defining and designing new processes.
- Experience analysing business requirements in a BI/Analytics context and designing data models to transform raw data into meaningful insights.
- Good to have: knowledge of Data Visualization.
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
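As a small illustration of the PySpark skills described (file formats, transformations, tabular outputs), a hedged sketch; the paths and column names are hypothetical placeholders:

```python
# Minimal PySpark sketch: read raw JSON events, apply a simple normalization,
# and write partitioned Parquet. Input/output paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_normalize").getOrCreate()

raw = spark.read.json("hdfs:///data/raw/events/")  # hypothetical input path

cleaned = (
    raw.dropDuplicates(["event_id"])                  # de-duplicate on business key
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    .filter(F.col("event_type").isNotNull())          # drop malformed records
)

# Partition by date so downstream queries can prune files efficiently.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/events/"
)
```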

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 12 Lacs

Chennai

Work from Office

Minimum 3+ years as a Data Engineer (GenAI platform), building ETL/ELT workflows using AWS, Azure Databricks, Airflow, and Azure Data Factory. Experience in Azure Databricks, Snowflake, Airflow, Python, SQL, Spark, Spark Streaming, AWS EKS, CI/CD (Jenkins), Elasticsearch, SOLR, OpenSearch, and Vespa.

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We are looking for a strategic thinker with the ability to grasp new technologies and to innovate, develop, and nurture new solutions - a self-starter who can work in a diverse and fast-paced environment to support, maintain, and advance the capabilities of the unified data platform. This is a global role that requires partnering with the broader JLLT team at the country, regional, and global level, utilizing in-depth knowledge of cloud infrastructure technologies and platform engineering experience.

Responsibilities
- Developing ETL/ELT pipelines using Synapse pipelines and data flows
- Integrating Synapse with other Azure data services (Data Lake Storage, Data Factory, etc.)
- Building and maintaining data warehousing solutions using Synapse
- Design and contribute to information infrastructure and data management processes
- Develop data ingestion systems that cleanse and normalize diverse datasets
- Build data pipelines from various internal and external sources
- Create structure for unstructured data
- Develop solutions using both relational and non-relational databases
- Create proof-of-concept implementations to validate solution proposals

Sounds like you? To apply you need to have:

Experience & Education
- Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
- Worked with a Cloud delivery team to provide a technical solutions and services roadmap for the customer.
- Knowledge of creating IaaS and PaaS cloud solutions on the Azure platform that meet customer needs for scalability, reliability, and performance.

Technical Skills & Competencies
- Developing ETL/ELT pipelines using Synapse pipelines and data flows
- Integrating Synapse with other Azure data services (Data Lake Storage, Data Factory, etc.)
- Building and maintaining data warehousing solutions using Synapse
- Design and contribute to information infrastructure and data management processes
- Develop data ingestion systems that cleanse and normalize diverse datasets
- Build data pipelines from various internal and external sources
- Create structure for unstructured data

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Database Administration skills:
- Database software installation, patch management, and maintenance
- Data ETL and ELT jobs on Microsoft SQL Server & Oracle databases
- Azure Data Factory & Synapse
- Data warehousing and data mining
- Database backups and recovery

Required candidate profile:
- 6-9 years of experience as a Database Administrator
- Data warehouse, Data Lake
- SQL development skills: tables, views, schemas, procedures, functions, triggers, CTEs, cursors
- Security, logs, data structures, data integration

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Hyderabad

Remote

As an ETL Developer on the Data and Analytics team at Guidewire, you will participate and collaborate with our customers and SI Partners who are adopting our Guidewire Data Platform as the centerpiece of their data foundation. You will facilitate, and be an active developer when necessary, to operationalize the realization of the agreed-upon ETL architecture goals of our customers, adhering to Guidewire best practices and standards. You will work with our customers, partners, and other Guidewire team members to deliver successful data transformation initiatives. You will utilize best practices for design, development, and delivery of customer projects. You will share knowledge with the wider Guidewire Data and Analytics team to enable predictable project outcomes and emerge as a leader in our thriving data practice. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization.

Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly skilled technical problem solvers who are self-motivated and take proactive actions for the benefit of our customers, ensuring that they succeed in their journey to the Guidewire Cloud Platform. You will collaborate closely with teams located around the world and adhere to our core values: Integrity, Collegiality, and Rationality.

Key Responsibilities:
- Build out technical processes from specifications provided in High Level Design and data specification documents.
- Integrate test and validation processes and methods into every step of the development process.
- Work with Lead Architects and provide inputs into defining user stories, scope, acceptance criteria, and estimates.
- Apply a systematic problem-solving approach, coupled with a sense of ownership and drive.
- Work independently in a fast-paced Agile environment.
- Actively contribute to the knowledge base from every project you are assigned to.

Qualifications:
- Bachelor's or Master's degree in Computer Science, or an equivalent level of demonstrable professional competency, and 3-5+ years in a technical capacity building out complex ETL data integration frameworks.
- 3+ years of experience with data processing and ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) concepts.
- Experience with ADF or AWS Glue, Spark/Scala, GDP, CDC, ETL data integration.
- Experience working with relational and/or NoSQL databases.
- Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.).
- Ability to work independently and within a team.

Nice to have:
- Insurance industry experience
- Experience with ADF or AWS Glue
- Experience with Azure Data Factory, Spark/Scala
- Experience with the Guidewire Data Platform

Posted 1 week ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Microsoft Fabric
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also be responsible for maintaining communication with stakeholders to provide updates and gather feedback, ensuring that the applications align with business needs and technical requirements. Your role will require a balance of technical expertise and leadership skills to drive successful project outcomes.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Should have exposure to Azure data components: Azure Data Factory and Azure Data Lake.
- Building ETL processes to extract, transform, and load data into the data models.
- Developing and maintaining data pipelines and integration workflows.
- Troubleshooting and resolving issues related to data models, ETL processes, and reporting.
- Design and implement data pipelines for ingestion, transformation, and loading (ETL/ELT) using Fabric Data Factory Pipeline and Dataflows Gen2.
- Develop scalable and reliable solutions for batch data integration across various structured and unstructured data sources.
- Oversee the development of data pipelines for smooth data flow into the Fabric Data Warehouse.
- Implement and maintain data solutions in Fabric Lakehouse and Fabric Warehouse.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Fabric.
- This position is based in Hyderabad.
- A 15 years full time education is required.

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Hyderabad

Work from Office

As a Data Engineering Manager at Micron Technology Inc., you will be a key member of our Technology Solutions group within the Smart Manufacturing and AI organization. The Data Engineering team works closely with Micron's Front End Manufacturing and Planning Ops business area in all aspects of data, data engineering, Machine Learning, and advanced analytics solutions. We are looking for leaders with strong technical experience in Big Data and cloud data warehouse technologies. This role will work primarily in cloud data warehouses like Snowflake and GCP platforms, monitoring solutions such as Splunk, and automation and machine learning using Python. You will provide technical and people leadership for the team. You will ensure critical projects as well as higher-level production support are delivered with high quality in collaboration with internal Micron team members.

Responsibilities and Tasks:
- Lead a team of Data Engineers.
- Accountable for performance discussions for direct reports; engage team members and work with them on their career development.
- Responsible for the development, coaching, and performance management of those who report to you.
- Build, maintain, and support a positive work culture that promotes safety, security, and environmental programs.
- Succession planning.
- Participate in design, architecture review, and deployment of big data and cloud data warehouse solutions.
- Lead and drive project requirements and deliverables.
- Implement solutions that eliminate or minimize technical debt through a well-designed architecture, data model, and lifecycle.
- Collaborate with key project stakeholders and I4 Solution analysts on project needs and translate requirements into technical needs for the team of data engineers.
- Bring together and share best-practice knowledge among the data engineering community.
- Coach, mentor, and help develop data engineers.
- Guide and manage the team through operational issues and escalations, and resolve business partner issues in a timely manner with strong collaboration and care for business priorities.
- Ability to learn and be conversational with multiple utilities and tools that help with operations monitoring and alerting.
- Collaborate with business partners and other teams to ensure data solutions are available, recover from failures, and operate healthily.
- Contribute to site-level initiatives such as hiring, cross-pillar leadership collaboration, resource management, and engagement.

Qualifications and Experience:
- 10+ years developing, delivering, and/or supporting big data engineering and advanced analytics solutions.
- 6+ years of experience managing or leading data engineering teams.
- 4-5 years of hands-on experience building cloud data-centric solutions in GCP or other cloud platforms.
- Intermediate to advanced programming experience, preferably Python; Spark experience is a plus.
- Proficient with ELT or ETL (preferably NiFi) techniques for complex data processing.
- Proficient with various database management systems, preferably SQL Server and Snowflake.
- Strong domain knowledge and understanding of Manufacturing Planning and Scheduling data.
- Candidates should be strong in data structures, data processing, and implementing complex data integrations with applications.
- Good to have: knowledge of a visualization tool like Power BI or Tableau.
- Demonstrated ability to lead multi-functional groups, with diverse interests and requirements, to a common objective.
- Presentation skills with a high degree of comfort speaking with management and developers.
- A passion for data and information with strong analytical, problem-solving, and organizational skills.
- The ability to work in a dynamic, fast-paced work environment.
- Self-motivated with the ability to work under minimal supervision.

Education: B.S. in Computer Science, Management Information Systems, or related fields.

Posted 1 week ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Hyderabad

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.

Key Responsibilities:
- Build and optimize ETL/ELT data pipelines.
- Integrate APIs and large-scale data ingestion systems.
- Automate data workflows using Python and cloud tools.
- Collaborate with data science and analytics teams.

Required Qualifications:
- 2+ years in data engineering using Python.
- Familiar with tools like Airflow, Pandas, and SQL.
- Experience with cloud data services (AWS/GCP/Azure).
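To illustrate the API-integration and pipeline work described above, a minimal Python sketch of an API-to-warehouse load using requests, pandas, and SQLAlchemy; the endpoint, credentials, columns, and table name are hypothetical placeholders:

```python
# Minimal sketch: pull records from a (hypothetical) REST API, normalize them with
# pandas, and append them to a warehouse table via SQLAlchemy.
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
DB_URL = "postgresql+psycopg2://etl:***@db.example.com:5432/analytics"  # hypothetical

def extract() -> pd.DataFrame:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return pd.json_normalize(response.json())  # assumes the API returns a JSON list

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset=["order_id"])          # hypothetical business key
    df["created_at"] = pd.to_datetime(df["created_at"])   # normalize timestamps
    return df

def load(df: pd.DataFrame) -> None:
    engine = create_engine(DB_URL)
    df.to_sql("orders_raw", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract()))
```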

Posted 1 week ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Mumbai

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.

Key Responsibilities:
- Build and optimize ETL/ELT data pipelines.
- Integrate APIs and large-scale data ingestion systems.
- Automate data workflows using Python and cloud tools.
- Collaborate with data science and analytics teams.

Required Qualifications:
- 2+ years in data engineering using Python.
- Familiar with tools like Airflow, Pandas, and SQL.
- Experience with cloud data services (AWS/GCP/Azure).

Posted 1 week ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Kolkata

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.

Key Responsibilities:
- Build and optimize ETL/ELT data pipelines.
- Integrate APIs and large-scale data ingestion systems.
- Automate data workflows using Python and cloud tools.
- Collaborate with data science and analytics teams.

Required Qualifications:
- 2+ years in data engineering using Python.
- Familiar with tools like Airflow, Pandas, and SQL.
- Experience with cloud data services (AWS/GCP/Azure).

Posted 1 week ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Bengaluru

Remote

We are hiring a Python-based Data Engineer to develop ETL processes and data pipelines.

Key Responsibilities:
- Build and optimize ETL/ELT data pipelines.
- Integrate APIs and large-scale data ingestion systems.
- Automate data workflows using Python and cloud tools.
- Collaborate with data science and analytics teams.

Required Qualifications:
- 2+ years in data engineering using Python.
- Familiar with tools like Airflow, Pandas, and SQL.
- Experience with cloud data services (AWS/GCP/Azure).

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Are you a skilled Data Architect with a passion for tackling intricate data challenges from various structured and unstructured sources? Do you excel in crafting micro data lakes and spearheading data strategies at an enterprise level? If this sounds like you, we are eager to learn more about your expertise.

In this role, you will be responsible for designing and constructing tailored micro data lakes specifically catered to the lending domain. Your tasks will include defining and executing enterprise data strategies encompassing modeling, lineage, and governance. You will play a crucial role in architecting robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from diverse sources like APIs, PDFs, logs, and databases. Furthermore, you will be instrumental in establishing best practices related to data quality, metadata management, and data lifecycle control. Your hands-on involvement in implementing processes, strategies, and tools will be pivotal in creating innovative products. Collaboration with engineering and product teams to align data architecture with overarching business objectives will be a key aspect of your role.

To excel in this position, you should bring over 10 years of experience in data architecture and engineering. A deep understanding of both structured and unstructured data ecosystems is essential, along with practical experience in ETL, ELT, stream processing, querying, and data modeling. Proficiency in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python is a must. Additionally, expertise in cloud-native data platforms like AWS, Azure, or GCP is highly desirable, along with a solid foundation in data governance, privacy, and compliance standards. Exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, and a background in fintech, lending, or regulatory data environments is also beneficial.

This role offers you the chance to lead a data-first transformation, develop products that drive AI adoption, and enjoy the autonomy to design, build, and scale modern data architecture. You will be part of a forward-thinking, collaborative, and tech-driven culture with access to cutting-edge tools and technologies in the data ecosystem. If you are ready to shape the future of data with us, we encourage you to apply for this exciting opportunity based in Chennai. Join us in redefining data architecture and driving innovation in the realm of structured and unstructured data sources.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an organization with over 26 years of experience in delivering Software Product Development, Quality Engineering, and Digital Transformation Consulting Services to Global SMEs & Large Enterprises, CES has established long-term relationships with leading Fortune 500 companies across various industries such as Automotive, AgTech, Bio Science, EdTech, FinTech, Manufacturing, Online Retailers, and Investment Banks. These relationships, spanning over a decade, are built on our commitment to timely delivery of quality services, investments in technology innovations, and fostering a true partnership mindset with our customers. In our current phase of exponential growth, we maintain a consistent focus on continuous improvement and a process-oriented culture. To further support our accelerated growth, we are seeking qualified and committed individuals to join us and play an exceptional role. You can learn more about us at: http://www.cesltd.com/

Experience with Azure Synapse Analytics is a key requirement for this role. The ideal candidate should have hands-on experience in designing, developing, and deploying solutions using Azure Synapse Analytics, including a good understanding of its various components such as SQL pools, Spark pools, and Integration Runtimes. Proficiency in Azure Data Lake Storage is also essential, with a deep understanding of its architecture, features, and best practices for managing a large-scale Data Lake or Lakehouse in an Azure environment.

Moreover, the candidate should have experience with AI tools and LLMs (e.g., GitHub Copilot, Copilot, ChatGPT) for automating responsibilities related to the role. Knowledge of Avro and Parquet file formats is required, including experience in data serialization, compression techniques, and schema evolution in a big data environment. Prior experience working with data in a healthcare or clinical laboratory setting is highly desirable, along with a strong understanding of PHI, GDPR, HIPAA, and HITRUST regulations. Relevant certifications such as Azure Data Engineer Associate or Azure Synapse Analytics Developer Associate are highly desirable for this position.

The essential functions of the role include designing, developing, and maintaining data pipelines for ingestion, transformation, and loading of data into Azure Synapse Analytics, as well as working on data models, SQL queries, stored procedures, and other artifacts necessary for data processing and analysis. Successful candidates should possess proficiency in relational databases such as Oracle, Microsoft SQL Server, PostgreSQL, and MySQL/MariaDB, strong SQL skills, experience in building ELT pipelines and data integration solutions, familiarity with data modeling and warehousing concepts, and excellent analytical and problem-solving abilities. Effective communication and collaboration skills are also crucial for working with cross-functional teams.

If you are a dedicated professional with the required expertise and skills, we invite you to join our team and contribute to our continued success in delivering exceptional services to our clients.

Posted 2 weeks ago

Apply