
1188 ADF Jobs - Page 13

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

This is a 6-month contract opportunity with an MNC, converting to a permanent role based on performance. The Senior Data Engineer will be responsible for designing, implementing, and maintaining data solutions on the Microsoft Azure data platform and SQL Server (SSIS, SSAS, UC4 Automic), collaborating with various stakeholders, and ensuring the efficient processing, storage, and retrieval of large volumes of data.

Technical Expertise and Responsibilities
- Design, build, and maintain scalable and reliable data pipelines.
- Design and build solutions in Azure Data Factory and Databricks to extract, transform, and load data between different source and target systems.
- Design and build solutions in SSIS.
- Analyze and understand the existing data landscape and provide recommendations and innovative ideas for rearchitecting, optimizing, and streamlining it to improve efficiency and scalability.
- Collaborate and communicate effectively with onshore counterparts to address technical gaps, requirement challenges, and other complex scenarios.
- Monitor and troubleshoot data systems to ensure high performance and reliability.
- Be highly analytical and detail-oriented, with extensive familiarity with database management principles.
- Optimize data processes for speed and efficiency.
- Ensure the data architecture supports business requirements and data governance policies.
- Define and execute the data engineering strategy in alignment with the company's goals.
- Integrate data from various sources, ensuring data quality and consistency.
- Stay updated with emerging technologies and industry trends.
- Understand the big-picture business process, drawing on deep knowledge of the banking industry, and translate it into data requirements.
- Enable and run data migrations across different databases and servers.
- Perform thorough testing and validation to support the accuracy of data transformations and the data verification used in machine learning models.
- Analyze data and systems to define data requirements; be well versed in data structures and algorithms.
- Define data mappings, working with the business, digital, and data teams.
- Maintain, test, and validate the performance of data pipelines.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Analyze and identify gaps in data needs and work with business and IT to align on them.
- Troubleshoot and resolve technical issues as they arise.
- Optimize data flow and collection for cross-functional teams.
- Work closely with onshore data counterparts, product owners, and business stakeholders to understand data needs and strategies.
- Collaborate with IT and DevOps teams to ensure the data infrastructure aligns with the overall IT architecture.
- Implement best practices for data security and privacy.
- Drive continuous improvement initiatives within the data engineering function.
- Understand the impact of data conversions as they pertain to servicing operations.
- Manage higher volumes and more complex cases with accuracy and efficiency.

Role Expectations
- Design and develop warehouse solutions using Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services.
- Be proficient in SSIS, SQL, and query optimization.
- Have worked in an onshore-offshore model managing challenging scenarios.
- Expertise in working with large amounts of data (structured and unstructured), building data pipelines for ETL workloads, and generating insights using data science and analytics.
- Expertise in Azure and AWS cloud services and DevOps/CI/CD frameworks.
- Ability to work with ambiguity and vague requirements and transform them into deliverables.
- A good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
- Drive automation efforts across the data analytics team using Infrastructure as Code (Terraform), configuration management, and CI/CD tools such as Jenkins.
- Help build and define architecture frameworks, best practices, and processes.
- Collaborate on data warehouse architecture and technical design discussions.
- Expertise in Azure Data Factory and familiarity with building pipelines for ETL projects.
- Expertise in SQL and experience working with relational databases.
- Expertise in Python and ETL projects; Databricks experience is an added advantage.
- Expertise across the data life cycle: data ingestion, transformation, loading, validation, and performance tuning.

Skillsets Required
Must have: SQL, PL/SQL, SSIS, SSAS, TFS, Azure Data Factory
Prefer to have: Azure Databricks, Azure Synapse, ADLS, Lakehouse architecture, Python, SCD concepts and implementation, UC4, Power BI, DevOps CI/CD, Banking domain
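Several of the preferred skills above (Databricks, Lakehouse architecture, SCD concepts) come together in a typical Slowly Changing Dimension Type 2 load. The sketch below is a minimal, hypothetical illustration of that pattern on Databricks using the Delta Lake Python API; the table and column names (dim_customer, customer_id, etc.) are assumptions, not details from this posting.

```python
# Minimal SCD Type 2 sketch for Databricks / Delta Lake (illustrative only).
# Assumes a SparkSession named `spark` (provided by Databricks) and an existing
# Delta dimension table `dim_customer` with columns:
#   customer_id, name, city, is_current, effective_date, end_date
from delta.tables import DeltaTable
from pyspark.sql import functions as F

def apply_scd2(updates_df, dim_table_name="dim_customer", key="customer_id"):
    dim = DeltaTable.forName(spark, dim_table_name)

    # Step 1: expire the current version of any dimension row whose attributes changed.
    (dim.alias("t")
        .merge(updates_df.alias("s"), f"t.{key} = s.{key} AND t.is_current = true")
        .whenMatchedUpdate(
            condition="t.name <> s.name OR t.city <> s.city",
            set={"is_current": "false", "end_date": "current_date()"})
        .execute())

    # Step 2: insert the new version of changed keys (and brand-new keys) as current rows.
    still_current = (spark.table(dim_table_name)
                     .filter("is_current = true")
                     .select(key))
    new_rows = (updates_df.join(still_current, on=key, how="left_anti")
                .withColumn("is_current", F.lit(True))
                .withColumn("effective_date", F.current_date())
                .withColumn("end_date", F.lit(None).cast("date")))
    new_rows.write.format("delta").mode("append").saveAsTable(dim_table_name)
```

The two-pass approach (expire, then append) keeps each step idempotent to reason about; a single MERGE with a staged union is a common alternative.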

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Kanerika Who we are: Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth. Locations: We are located in Hyderabad, Indore and Ahmedabad (India). What You Will Do: As a Data Governance Architect at Kanerika, you will play a pivotal role in shaping and executing the enterprise data governance strategy. Your responsibilities include: 1. Strategy, Framework, and Governance Operating Model - Develop and maintain enterprise-wide data governance strategies, standards, and policies. - Align governance practices with business goals like regulatory compliance and analytics readiness. - Define roles and responsibilities within the governance operating model. - Drive governance maturity assessments and lead change management initiatives. 2. Stakeholder Alignment & Organizational Enablement - Collaborate across IT, legal, business, and compliance teams to align governance priorities. - Define stewardship models and create enablement, training, and communication programs. - Conduct onboarding sessions and workshops to promote governance awareness. 3. Architecture Design for Data Governance Platforms - Design scalable and modular data governance architecture. - Evaluate tools like Microsoft Purview, Collibra, Alation, BigID, Informatica. - Ensure integration with metadata, privacy, quality, and policy systems. 4. Microsoft Purview Solution Architecture - Lead end-to-end implementation and management of Microsoft Purview. - Configure RBAC, collections, metadata scanning, business glossary, and classification rules. - Implement sensitivity labels, insider risk controls, retention, data map, and audit dashboards. 5. Metadata, Lineage & Glossary Management - Architect metadata repositories and ingestion workflows. - Ensure end-to-end lineage (ADF → Synapse → Power BI). - Define governance over business glossary and approval workflows. 6. Data Classification, Access & Policy Management - Define and enforce rules for data classification, access, retention, and sharing. - Align with GDPR, HIPAA, CCPA, SOX regulations. - Use Microsoft Purview and MIP for policy enforcement automation. 7. Data Quality Governance - Define KPIs, validation rules, and remediation workflows for enterprise data quality. - Design scalable quality frameworks integrated into data pipelines. 8. Compliance, Risk, and Audit Oversight - Identify risks and define standards for compliance reporting and audits. - Configure usage analytics, alerts, and dashboards for policy enforcement. 9. Automation & Integration - Automate governance processes using PowerShell, Azure Functions, Logic Apps, REST APIs. - Integrate governance tools with Azure Monitor, Synapse Link, Power BI, and third-party platforms. Required Qualifications: - 15+ years in data governance and management. - Expertise in Microsoft Purview, Informatica, and related platforms. - Experience leading end-to-end governance initiatives. - Strong understanding of metadata, lineage, policy management, and compliance regulations. 
- Hands-on skills in Azure Data Factory, REST APIs, PowerShell, and governance architecture. - Familiar with Agile methodologies and stakeholder management.
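As a rough illustration of the automation and integration work described in point 9, the sketch below acquires an Azure AD token and queries a Microsoft Purview account over REST from Python. Treat it as an assumption-laden outline: the account name is a placeholder, and the data-plane endpoint path and api-version vary between Purview releases, so they must be verified against the current Purview REST documentation before use.

```python
# Hedged sketch: search a Purview catalog for assets matching a keyword.
# The endpoint path and api-version below are assumptions to verify against
# the current Microsoft Purview REST API docs; only the azure-identity token
# flow is standard.
import requests
from azure.identity import DefaultAzureCredential

PURVIEW_ACCOUNT = "contoso-purview"          # placeholder account name
ENDPOINT = f"https://{PURVIEW_ACCOUNT}.purview.azure.com"

def search_catalog(keyword: str, limit: int = 25):
    credential = DefaultAzureCredential()
    token = credential.get_token("https://purview.azure.net/.default").token
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

    # Assumed search endpoint; adjust the path and api-version for your Purview version.
    url = f"{ENDPOINT}/datamap/api/search/query?api-version=2023-09-01"
    body = {"keywords": keyword, "limit": limit}
    resp = requests.post(url, headers=headers, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])

if __name__ == "__main__":
    for asset in search_catalog("customer"):
        print(asset.get("qualifiedName"), asset.get("entityType"))
```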

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Responsibilities
• Architecture and design for data engineering and machine learning projects: establish the architecture and target design for data engineering and machine learning projects.
• Requirement analysis, planning, effort and resource estimation: analyze the current inventory, review and formalize requirements, and produce the project plan and execution plan.
• Advisory services and best practices: troubleshooting, performance tuning, cost optimization, operational runbooks, and mentoring.
• Large migrations: assist customers with large migrations to Databricks from Hadoop ecosystems, data warehouses (Teradata, DataStage, Netezza, Ab Initio), ETL engines (Informatica), SAS, SQL, DW, and cloud-based data platforms such as Redshift, Snowflake, EMR, etc.
• Design, build, and optimize data pipelines: the Databricks implementation will be best in class, with flexibility for future iterations.
• Production readiness: assist customers with production readiness, including exception handling, production cutover, capture analysis, alert scheduling, and monitoring.
• Machine learning (ML) – model review, tuning, ML operations, and optimization: build and review ML models, apply ML best practices, and manage the model lifecycle, ML frameworks, and deployment of models to production.

Must Have
▪ Pre-sales experience is a must.
▪ Hands-on experience with distributed computing frameworks such as Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL).
▪ Willingness to work with product teams to best optimize product features and functions.
▪ Experience with batch workloads and real-time streaming at high data volumes and frequencies.
▪ Performance optimization of Spark workloads.
▪ Environment setup, user management, authentication, and cluster management on Databricks.
▪ Professional curiosity and the ability to ramp up on new technologies and tasks.
▪ Good understanding of SQL and a good grasp of relational and analytical database management theory and practice.

Key Skills
• Python, SQL, and PySpark
• Big data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase)
• Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
• Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)
• AWS (Lambda, AWS Glue, S3, Redshift)
• Data modelling, ETL methodology
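Since Spark workload performance optimization is called out explicitly, here is a minimal, generic PySpark sketch of two common levers — enabling Adaptive Query Execution and broadcasting a small dimension in a join — with hypothetical table paths; it illustrates the technique only and is not code from this engagement.

```python
# Generic Spark tuning sketch: AQE settings plus an explicit broadcast join.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("tuning-sketch")
         # Adaptive Query Execution: coalesce shuffle partitions and handle skew at runtime.
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
         .config("spark.sql.adaptive.skewJoin.enabled", "true")
         .getOrCreate())

# Large fact table and a small dimension table (placeholder paths).
fact = spark.read.parquet("/data/fact_sales")
dim = spark.read.parquet("/data/dim_product")

# Broadcasting the small side avoids shuffling the large fact table.
enriched = fact.join(broadcast(dim), on="product_id", how="left")

# Partition the output by a write key to keep files reasonably sized and prunable.
(enriched.repartition("sale_date")
         .write.mode("overwrite")
         .partitionBy("sale_date")
         .parquet("/data/out/sales_enriched"))
```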

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Greater Hyderabad Area

On-site

About Us: Join our stealth-mode AI startup on a mission to revolutionize AI and data solutions. Headquartered in Hyderabad, we are a well-funded startup with a world-class team and a passion for innovation in AI, NLP, Computer Vision, and Speech Recognition. We are looking for a highly motivated Data Engineer with 2 to 4 years of experience to join our team and work on cutting-edge projects in AI and big data technologies.

Role Overview: As a Data Engineer, you will design, build, and optimize scalable data pipelines and platforms to support our AI-driven solutions. You’ll collaborate with cross-functional teams to enable real-time data processing and insights for enterprise-level applications.

Key Responsibilities: Develop and maintain robust data pipelines using tools like PySpark, Kafka, and Airflow. Design and optimize data workflows for high scalability and performance using Hadoop, HDFS, and Hive. Integrate structured and unstructured data from diverse sources into a centralized platform. Leverage big data technologies for real-time processing and streaming using Spark Streaming and NiFi. Work on cloud-based platforms such as AWS, Azure, and GCP to deploy and monitor scalable data solutions. Collaborate with AI/ML teams to deploy machine learning models using MLflow and integrate AI capabilities into data pipelines. Automate and monitor workflows to ensure seamless operations using CI/CD pipelines, Kubernetes, and Docker. Implement data validation, performance testing, and troubleshooting of large-scale datasets. Prepare and share actionable insights through BI tools like Tableau and Grafana.

Required Skills and Qualifications: Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related fields. Experience: 2 to 4 years in data engineering roles, working with big data ecosystems. Technical Proficiency: Big Data Tools: Hadoop, HDFS, PySpark, Hive, Sqoop, Kafka, Spark Streaming, Airflow, Presto, NiFi. Cloud Platforms: AWS (Glue, S3, EMR), Azure (ADF, HDInsight), GCP (BigQuery, Pub/Sub). Programming Languages: Python, SQL, Scala. DevOps & Automation: Jenkins, Ansible, Kubernetes, Docker. Databases: MySQL, Oracle, HBase, Redis. Visualization Tools: Tableau, Grafana, Zeppelin. Knowledge of machine learning models, AI tools (e.g., TensorFlow, H2O), and feature engineering is a plus. Strong problem-solving skills with attention to detail and the ability to manage multiple projects. Excellent communication and collaboration skills in a fast-paced environment.

What We Offer: Opportunity to work on innovative AI projects with a global impact. Collaborative work culture with access to cutting-edge technologies.
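To illustrate the kind of pipeline orchestration mentioned above (PySpark jobs coordinated with Airflow), here is a minimal, hypothetical Airflow DAG; the task names, spark-submit script path, and schedule are assumptions for illustration only.

```python
# Minimal Airflow DAG sketch: run a daily PySpark batch job and then a validation step.
# The script path, schedule, and task logic are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def validate_output(**context):
    # Placeholder validation: a real pipeline might check row counts or
    # freshness in the target table and raise on failure.
    print("validating batch output for", context["ds"])

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/ingest_events.py --run-date {{ ds }}",
    )
    validate = PythonOperator(task_id="validate_output", python_callable=validate_output)

    run_spark_job >> validate
```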

Posted 2 weeks ago

Apply

5.0 - 31.0 years

16 - 17 Lacs

Hyderabad

On-site

Job Title: Cloud Migration Consultant – (AWS to Azure) Experience: 4+ years in application assessment and migration About the Role We’re looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You’ll also support application modernization efforts in Azure, with exposure to AWS as needed. Key Responsibilities Assess application readiness and document architecture, dependencies, and migration strategy. Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, CloudockIt, PowerShell. Create architecture diagrams, migration playbooks, and maintain Azure DevOps boards. Set up applications both on-premises and in cloud environments (primarily Azure). Support proof-of-concepts (PoCs) and advise on migration options. Collaborate with application, database, and infrastructure teams to enable smooth transition to migration factory teams. Track progress, blockers, and risks, reporting timely status to project leadership. Required Skills 4+ years of experience in cloud migration and assessment Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.) Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3) Experience with Java (SpringBoot)/C#, .Net/Python, Angular/React.js, REST APIs Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps Network infrastructure understanding (VNets, NSGs, Firewalls, WAFs) IAM knowledge: OAuth, SAML, Okta/SiteMinder Experience with Big Data tools like Databricks, Hadoop, Oracle, DocumentDB Preferred Qualifications Azure or AWS certifications Prior experience with enterprise cloud migrations (especially in Microsoft ecosystem) Excellent communication and stakeholder management skills Educational qualification: B.E/B.Tech/MCA

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Azure Solutions Architect or Lead. Skills: Data warehousing, SQL, ETL, Python/PySpark, understanding of data governance, data security & compliance, Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage. Experience Required: 10 - 15 years. Job Location: Hyderabad, Pune & Greater Noida.

We at Coforge are hiring an Azure Solutions Architect or Lead with the following skillset: Design and build data architecture frameworks leveraging Azure services (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database, ADLS Gen2, Synapse engineering, Fabric notebooks, PySpark, Scala, Python, etc.). Define and implement reference architectures and architecture blueprinting. Experience demonstrating and discussing a wide variety of data engineering tools and architectures across cloud providers, especially on the Azure platform. Experience building data products, data processing frameworks, metadata-driven ETL pipelines, data security, data standardization, data quality, and data reconciliation workflows. Vast experience building data products on the Microsoft Azure / Fabric platform: Azure Managed Instance, Microsoft Fabric, Lakehouse, Synapse engineering, and Microsoft OneLake. Hands-on development experience with the above technologies and experience implementing DevOps. Implement data modeling best practices (dimensional modeling, data vault). Ensure data security and compliance using Azure security tools (Azure Active Directory, Azure Key Vault). Implement data governance and data quality processes. Utilize version control tools (e.g., Git). Work with Infrastructure as Code (e.g., Terraform, ARM templates). Work within an Agile environment (e.g., Scrum). Effectively communicate with stakeholders at all levels. 10+ years of experience in data warehousing and Azure cloud technologies. Strong hands-on experience with Azure Fabric, Synapse, ADF, SQL, and Python/PySpark. Proven expertise in designing and implementing data architectures on Azure using Microsoft Fabric, Azure Synapse, ADF, and Fabric notebooks. Exposure to Azure DevOps and Business Intelligence. Solid understanding of data governance, data security, and compliance. Excellent communication and collaboration skills. Ability to work effectively in a UK shift (1 PM IST to 9:30 PM IST). Ability to work in a hybrid environment, with 3 days/week in office. Please share your CV at Gaurav.2.Kumar@coforge.com or WhatsApp 9667427662 for any queries.
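One of the items above is building metadata-driven ETL pipelines. A minimal, generic sketch of that idea in PySpark follows: a small configuration list drives which sources are loaded and where they land. The table names and paths are invented for illustration; in ADF or Fabric the same pattern is usually expressed with a Lookup over a control table plus a ForEach activity.

```python
# Metadata-driven load sketch: one generic loader driven by a config list.
# Source paths, formats, and target table names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-etl").getOrCreate()

# In practice this would come from a control table or JSON config, not be hard-coded.
LOAD_CONFIG = [
    {"source": "/landing/customers.csv", "format": "csv", "target": "bronze.customers", "mode": "overwrite"},
    {"source": "/landing/orders.parquet", "format": "parquet", "target": "bronze.orders", "mode": "append"},
]

def run_load(entry: dict) -> None:
    reader = spark.read.format(entry["format"])
    if entry["format"] == "csv":
        reader = reader.option("header", "true").option("inferSchema", "true")
    df = reader.load(entry["source"])
    # Write to the configured target table; the same code path serves every source.
    df.write.mode(entry["mode"]).saveAsTable(entry["target"])
    print(f"loaded {entry['source']} -> {entry['target']} ({df.count()} rows)")

for entry in LOAD_CONFIG:
    run_load(entry)
```

Adding a new feed then becomes a configuration change rather than new pipeline code, which is the main point of the pattern.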

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hire Project Management Professionals in the following areas : Technical skills Should have 15 + years of working experience handling end to end DWH projects. Experience handling ETL Migration / Visualization projects includes technologies like AWS Glue /Redshift, Power BI/Tableau, Azure ADF/Data bricks, Lead technical design and architecture discussions across cross-functional teams Oversee software requirements (including design, architecture, and testing) Manage through agile methodologies, such as Scrum Decipher technical needs of other departments within the organization and translate them across stakeholder groups. Leadership skills Act as a communications liaison between technical and non-technical audiences Develop and maintain productive internal relationships Facilitate cross-collaboration and understanding between IT and other departments Generate targeted reports for different internal and/or external audiences Stay current on the latest news, information, and trends about program management and the organization’s industry. Business responsibilities Organize and track jobs, clarify project scopes, proactively manage risks, deal with project escalations, ruthlessly prioritize tasks and dependencies, and problem solve Meet specific business objectives and metrics Support the roadmap planning process Develop strategies and implement tactics to follow through on those strategies Solve complex business problems within allocated timelines and budget Represent company management to technical teams and vice versa At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles Flexible work arrangements, Free spirit, and emotional positivity Agile self-determination, trust, transparency, and open collaboration All Support needed for the realization of business goals, Stable employment with a great atmosphere and ethical corporate culture

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Description: Our client is an EU subsidiary of a global financial bank working in multiple markets and asset classes. The bank's data store has been transformed into a data warehouse (DWH), which is the central source for regulatory reporting. It is also intended to be the core data integration platform, providing data not only for regulatory reporting but also for risk modelling, portfolio analysis, ad hoc analysis & reporting (Finance, Risk, other), MI reporting, data quality management, etc. Due to the high demand of regulatory requirements, many regulatory projects are in progress to reflect regulatory requirements in existing regulatory reports and to develop new regulatory reports on MDS. Examples are IFRS9, AnaCredit, IRRBB, the new Deposit Guarantee Schemes Directive (DGSD), Bank Data Retrieval Portal (BDRP), and the Fundamental Review of the Trading Book (FRTB). The DWH / ETL Tester will work closely with the development team to design and build interfaces and integrate data from a variety of internal and external data sources into the new Enterprise Data Warehouse environment. The ETL Tester will be primarily responsible for testing the Enterprise Data Warehouse using automation, within industry-recognized ETL standards, architecture, and best practices.

Responsibilities: Testing the bank's data warehouse system changes, testing the changes (user stories), supporting IT integration testing in TST, and supporting business stakeholders with User Acceptance Testing. It is a hands-on position: you will be required to write and execute test cases and build test automation where applicable.

Overall Purpose of Job
- Test the MDS data warehouse system
- Validate regulatory reports
- Support IT and business stakeholders during the UAT phase
- Contribute to improvement of testing and development processes
- Work as part of a cross-functional team and take ownership of tasks
- Contribute to testing deliverables
- Ensure the implementation of test standards and best practices for the agile model and contribute to their development
- Engage with internal stakeholders in various areas of the organization to seek alignment and collaboration
- Deal with external stakeholders / vendors
- Identify risks / issues and present associated mitigating actions, taking into account the criticality of the underlying business domain
- Contribute to continuous improvement of standard testing processes

Additional responsibilities include working closely with the systems analysts and the application developers, using functional design documentation and technical specifications to facilitate the creation and execution of manual and automated test scripts, performing data analysis and creation of test data, tracking and helping resolve defects, and ensuring that all testing is conducted and documented in adherence with the bank's standards.

Mandatory Skills Description: Must have experience/expertise: Tester, Test Automation, Data Warehouse, Banking.

Technical:
- At least 5 years of testing experience, of which at least 2 years in the finance industry, with good knowledge of data warehouse and RDBMS concepts.
- Strong SQL scripting knowledge and hands-on experience with ETL and databases.
- Expertise in new-age cloud-based data warehouse solutions - ADF, Snowflake, GCP, etc.
- Hands-on expertise in writing complex SQL using multiple JOINs and complex functions to test various transformations and ETL requirements.
- Knowledge and experience of creating test automation for database and ETL testing regression suites.
- Automation using Selenium with Python (or JavaScript), Python scripts, and shell scripts.
- Knowledge of framework design and REST API testing of databases using Python.
- Experience using the Atlassian toolset, Azure DevOps, and code & version management - Git, Bitbucket, Azure Repos, etc.
- Help and provide inputs for the creation of a test plan that addresses the needs of cloud-based ETL pipelines.

Non-Technical:
- Able to work in an agile environment
- Experience working on high-priority projects (high pressure on delivery)
- Some flexibility outside 9-5 working hours (Netherlands time zone)
- Able to work in a demanding environment with a pragmatic, "can do" attitude
- Able to work independently and also to collaborate across the organization
- Highly developed problem-solving skills with minimal supervision
- Able to easily adapt to new circumstances / technologies / procedures
- Stress-resistant and constructive - whatever the context
- Able to align with existing standards and act with attention to detail
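The SQL-heavy testing duties above (comparing transformations between source and target) are often automated as parameterized tests. Below is a minimal, hypothetical pytest sketch of a source-vs-target reconciliation check; the DSNs, table names, and driver (pyodbc is only an example) are assumptions and would be replaced by the project's own test framework.

```python
# Hedged sketch: pytest-style reconciliation checks between a staging source
# and a warehouse target. DSNs and table names are placeholders; swap in the
# project's real connectivity layer.
import pyodbc
import pytest

SOURCE_DSN = "DSN=staging_db"      # placeholder DSN
TARGET_DSN = "DSN=warehouse_db"    # placeholder DSN

def scalar(dsn: str, sql: str):
    """Run a query expected to return a single value."""
    with pyodbc.connect(dsn) as conn:
        return conn.cursor().execute(sql).fetchone()[0]

@pytest.mark.parametrize("source_table,target_table", [
    ("stg.customers", "dwh.dim_customer"),
    ("stg.accounts", "dwh.dim_account"),
])
def test_row_counts_match(source_table, target_table):
    src = scalar(SOURCE_DSN, f"SELECT COUNT(*) FROM {source_table}")
    tgt = scalar(TARGET_DSN, f"SELECT COUNT(*) FROM {target_table} WHERE is_current = 1")
    assert src == tgt, f"{source_table} has {src} rows but {target_table} has {tgt}"

def test_balance_totals_match():
    # Aggregate checks catch transformation errors that plain row counts miss.
    src = scalar(SOURCE_DSN, "SELECT SUM(balance) FROM stg.accounts")
    tgt = scalar(TARGET_DSN, "SELECT SUM(balance_amt) FROM dwh.fact_account_balance")
    assert abs(src - tgt) < 0.01
```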

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Family Data Science & Analysis (India) Travel Required None Clearance Required None What You Will Do Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes. Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality. Build and optimize data architectures for operational and analytical purposes. Collaborate with cross-functional teams to gather and define data requirements. Implement data quality, data governance, and data security practices. Manage and optimize cloud-based data platforms (Azure\AWS). Develop and maintain Python/PySpark libraries for data ingestion, Processing and integration with both internal and external data sources. Design and optimize scalable data pipelines using Azure data factory and Spark(Databricks) Work with stakeholders, including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Develop frameworks for data ingestion, transformation, and validation. Mentor junior data engineers and guide best practices in data engineering. Evaluate and integrate new technologies and tools to improve data infrastructure. Ensure compliance with data privacy regulations (HIPAA, etc.). Monitor performance and troubleshoot issues across the data ecosystem. Automated deployment of data pipelines using GIT hub actions \ Azure devops What You Will Need Bachelors or master’s degree in computer science, Information Systems, Statistics, Math, Engineering, or related discipline. Minimum 5 + years of solid hands-on experience in data engineering and cloud services. Extensive working experience with advanced SQL and deep understanding of SQL. Good Experience in Azure data factory (ADF), Databricks , Python and PySpark. Good experience in modern data storage concepts data lake, lake house. Experience in other cloud services (AWS) and data processing technologies will be added advantage. Ability to enhance , develop and resolve defects in ETL process using cloud services. Experience handling large volumes (multiple terabytes) of incoming data from clients and 3rd party sources in various formats such as text, csv, EDI X12 files and access database. Experience with software development methodologies (Agile, Waterfall) and version control tools Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills. Good communication skill. What Would Be Nice To Have AWS ETL Platform – Glue , S3 One or more programming languages such as Java, .Net Experience in US health care domain and insurance claim processing. What We Offer Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. About Guidehouse Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. 
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
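Given this listing's emphasis on ingesting large client files (text, CSV, EDI) through ADF and Databricks, the following is a small, generic PySpark sketch of schema-enforced CSV ingestion that quarantines malformed rows instead of failing the whole load; the schema, paths, and column names are placeholders, not details of this role.

```python
# Generic ingestion sketch: read a large CSV with an explicit schema and
# keep malformed records in a separate "quarantine" output for review.
# All paths and fields are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DateType, DecimalType

spark = SparkSession.builder.appName("csv-ingest-sketch").getOrCreate()

claim_schema = StructType([
    StructField("claim_id", StringType(), False),
    StructField("member_id", StringType(), True),
    StructField("service_date", DateType(), True),
    StructField("billed_amount", DecimalType(12, 2), True),
    StructField("_corrupt_record", StringType(), True),  # captures unparseable lines
])

raw = (spark.read
       .option("header", "true")
       .option("mode", "PERMISSIVE")
       .option("columnNameOfCorruptRecord", "_corrupt_record")
       .schema(claim_schema)
       .csv("/landing/claims/2024-06/*.csv"))

raw = raw.cache()  # caching is needed before filtering on the corrupt-record column
good = raw.filter(F.col("_corrupt_record").isNull()).drop("_corrupt_record")
bad = raw.filter(F.col("_corrupt_record").isNotNull()).select("_corrupt_record")

good.write.mode("append").parquet("/bronze/claims/")
bad.write.mode("append").text("/quarantine/claims/")
```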

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Azure Certified AI Engineer / Data Scientist Experience: 4–6 Years Engagement: Contract to Hire (C2H) Location: Pune (Onsite – 5 Days a Week) Company: Optimum Data Analytics (ODA) About Optimum Data Analytics (ODA) Optimum Data Analytics is a fast-growing data and AI consulting firm delivering innovative solutions to enterprise clients across industries. We specialize in data engineering, machine learning, and AI/GenAI-based platforms on cloud ecosystems. Role Overview We are looking for an Azure Certified AI Engineer or Data Scientist with 4–6 years of experience to join our Pune office on a full-time, onsite C2H engagement. The ideal candidate should be hands-on with building and deploying AI/ML solutions using Azure cloud services, and must hold an active Azure AI Engineer Associate or Azure Data Scientist Associate certification. Key Responsibilities Design and deploy AI/ML models using Azure AI/ML Studio, Azure Machine Learning, and Azure Cognitive Services. Implement and manage data pipelines, model training workflows, and ML lifecycle in the Azure ecosystem. Work with business stakeholders to gather requirements, analyze data, and deliver predictive insights. Collaborate with data engineers and product teams to deliver scalable and production-ready AI solutions. Ensure model monitoring, versioning, governance, and responsible AI practices are in place. Contribute to solution documentation and technical architecture. Required Skills & Qualifications 4–6 years of hands-on experience in AI/ML, data science, or machine learning engineering. Mandatory Certification: Microsoft Azure AI Engineer Associate OR Microsoft Azure Data Scientist Associate Strong knowledge of Azure services: Azure Machine Learning, Cognitive Services, Azure Functions, Data Factory, and Azure Storage. Proficient in Python, with experience using ML libraries such as scikit-learn, TensorFlow, PyTorch, or similar. Solid understanding of data science lifecycle, model evaluation, and performance optimization. Experience with version control tools like Git and deployment through CI/CD pipelines. Excellent problem-solving and communication skills. Good To Have Familiarity with LLMs, prompt engineering, or GenAI tools (Azure OpenAI, Hugging Face). Experience with Power BI or other data visualization tools. Exposure to MLOps tools and practices. Skills: machine learning,azure,scikit-learn,open ai,pytorch,azure machine learning,cognitive services,git,azure ai engineer associate,python,data science,tensorflow,communication,azure functions,azure storage,adf,data factory,artificial intelligence,azure data scientist associate,problem-solving,ci/cd pipelines,mlops
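As a small illustration of the model-training and lifecycle work described above, here is a generic scikit-learn training run with MLflow experiment tracking (MLflow is used here only as a stand-in for whatever MLOps tooling the team adopts); the dataset, model, and hyperparameters are placeholders.

```python
# Minimal train-evaluate-log sketch with scikit-learn and MLflow tracking.
# The dataset, model choice, and hyperparameters are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

params = {"n_estimators": 200, "max_depth": 6, "random_state": 42}

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    preds = model.predict(X_test)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("f1", f1_score(y_test, preds))
    # Persist the model artifact so it can be versioned and later deployed
    # (for example to an Azure ML endpoint) from the tracking store.
    mlflow.sklearn.log_model(model, artifact_path="model")
```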

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Role Overview: As a Data Governance Developer at Kanerika, you will be responsible for developing and managing robust metadata, lineage, and compliance frameworks using Microsoft Purview and other leading tools. You'll work closely with engineering and business teams to ensure data integrity, regulatory compliance, and operational transparency.

Key Responsibilities: Set up and manage Microsoft Purview: accounts, collections, RBAC, and policies. Integrate Purview with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake. Schedule and monitor metadata scanning, classification, and lineage tracking jobs. Build ingestion workflows for technical, business, and operational metadata. Tag, enrich, and organize assets with glossary terms and metadata. Automate lineage, glossary, and scanning processes via REST APIs, PowerShell, ADF, and Logic Apps. Design and enforce classification rules for PII, PCI, and PHI. Collaborate with domain owners on glossary and metadata quality governance. Generate compliance dashboards and lineage maps in Power BI.

Tools & Technologies: Governance platforms: Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog. Integration tools: Azure Data Factory, dbt, Talend. Automation & scripting: PowerShell, Azure Functions, Logic Apps, REST APIs. Compliance areas in Purview: sensitivity labels, policy management, auto-labeling, Data Loss Prevention (DLP), insider risk management, records management, Compliance Manager, lifecycle management, eDiscovery, audit, DSPM, information barriers, unified catalog.

Required Qualifications: 4-6 years of experience in data governance / data management. Hands-on with Microsoft Purview, especially lineage and classification workflows. Strong understanding of metadata management, glossary governance, and data classification. Familiarity with Azure Data Factory, dbt, and Talend. Working knowledge of data compliance regulations: GDPR, CCPA, SOX, HIPAA. Strong communication skills to collaborate across technical and non-technical teams.
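Purview handles most classification natively, but teams often prototype or supplement rules in code. The sketch below is a generic, standalone Python example of regex-based PII detection over sample column values — an illustration of the classification-rule concept only, not a Purview API call; the patterns are deliberately simplistic and would need hardening for real use.

```python
# Toy PII classification sketch: scan sample values and report which
# classifications (email, card number, Indian phone number) a column appears to hold.
# Patterns are intentionally simple and illustrative, not production-grade.
import re
from collections import defaultdict

PII_PATTERNS = {
    "EMAIL": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "CARD_NUMBER": re.compile(r"^\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}$"),
    "PHONE_IN": re.compile(r"^(\+91[- ]?)?[6-9]\d{9}$"),
}

def classify_column(sample_values, threshold=0.8):
    """Return the PII labels matched by at least `threshold` of non-empty samples."""
    hits = defaultdict(int)
    values = [v.strip() for v in sample_values if v and v.strip()]
    for value in values:
        for label, pattern in PII_PATTERNS.items():
            if pattern.match(value):
                hits[label] += 1
    return [label for label, count in hits.items() if values and count / len(values) >= threshold]

if __name__ == "__main__":
    samples = {
        "contact_email": ["a.kumar@example.com", "b.rao@example.org", "c@example.net"],
        "order_notes": ["call later", "left voicemail", "ok"],
    }
    for column, values in samples.items():
        print(column, "->", classify_column(values) or ["NONE"])
```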

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: DataOps Engineer Location: Hyderabad Experience: 5-10 Years Required Skills Bachelor's degree in Computer Science, Information Systems, or a related field. 5-10 years of experience in Data Engineering or DataOps roles. Strong hands-on experience with ADF, ADLS, Snowflake, and Azure DevOps or similar CI/CD platforms. Proficient in SQL and scripting languages such as Python. Solid understanding of ETL/ELT concepts and data warehousing. Experience with source control (e.g., Git) and infrastructure as code (e.g., ARM templates, Terraform). Knowledge of data security best practices in cloud environments.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description Bachelor's degree in Computer Science, Software Engineering, or a related field. 5+ years of experience as a Full Stack Developer. Strong proficiency in ReactJS, Next.js, C#, .NET, and Azure cloud services. Understanding of AngularJS and a willingness to learn new technologies. Experience with Azure Functions, Azure CosmosDB, AKS, ADF, Logic Apps, Azure Event Hubs, APIM, and Front Door. Excellent problem-solving, analytical, and debugging skills. Strong communication and interpersonal skills. Ability to work independently and as part of a team. Roles & Responsibilities: Development: Contribute to the development of robust and scalable applications using our primary tech stack, including ReactJS, Next.js, C#, .NET, Azure Functions, and Azure CosmosDB. Cross-Platform Compatibility: Demonstrate a solid understanding of AngularJS and be open to learning newer technologies as needed. Cloud Expertise: Possess a deep understanding of Azure cloud services, including Azure Functions, Azure CosmosDB, AKS, ADF, Logic Apps, Azure Event Hubs, APIM, and Front Door. Problem-Solving: Identify and resolve technical challenges effectively, leveraging your problem-solving skills and expertise. Collaboration: Work collaboratively with cross-functional teams to deliver high-quality solutions that meet business objectives.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS Hiring for Azure FinOps Experience: 8 to 10 Years Only Job Location: Kolkata/Pune Required Technical Skill Set: As a Cloud FinOps consultant, you are responsible for developing and implementing a robust program for cloud cost management that includes: a service-based cost allocation and classification strategy, tracking and management of cloud cost, cloud services rate setting, and defining consumption and showback/chargeback reports from both the provider and consumer perspective. Experienced Cloud FinOps practitioner who can work mainly on Azure, along with AWS platforms, for cloud optimization and cost savings (certification as a FinOps Practitioner will be an added advantage). Create a cloud cost optimization framework and governance mechanism. Takes ownership of cost analysis, reviewing recommendations, creating budget alerts, purchasing reservations / savings plans, tagging, anomaly detection, and forecasting spend. Experienced in rightsizing compute services, identifying unused resources, and Azure Hybrid Benefit (AHUB) implementation. Define and set up the cloud spend governance process and cloud spend tracking mechanism. Experience in driving deep architectural discussions to help customers ensure they are making the most cost-efficient cloud usage choices. Interacting with vendors on Enterprise Agreements, MCA, discounts, and various other optimization opportunities. Creating dashboards for visualizing monthly and yearly cost with various filters. Drive close working relationships with IT teams, finance teams, architects, and operations. Knowledge of IaaS, PaaS services and cloud technologies, for example Databricks, AKS, ADF, Log Analytics, load balancing, disaster recovery. Knowledge of app & data architectures and cloud-native patterns for development, etc. Kind Regards, Priyankha M

Posted 2 weeks ago

Apply

0 years

25 Lacs

India

On-site

Work Location: Kochi/Trivandrum 10-15+ Yrs Experience ▪ Expertise in Azure services including App Services, Functions, DevOps pipelines and ADF ▪ Expert-level knowledge of MuleSoft Anypoint Platform, API lifecycle management, and enterprise integration ▪ Proven skills in Java-based web applications with RDBMS or NoSQL backends ▪ Proven skills in Python and JavaScript for backend and full-stack solutions ▪ Deep understanding of object-oriented programming and design patterns ▪ Solid hands-on experience with RESTful API development and microservices architecture ▪ Familiarity with unit testing frameworks (e.g., TestNG) and integration testing in Java ▪ Experience with code review processes, test coverage validation, and CI/CD pipelines ▪ Proficiency in Git, SVN, and other version control systems ▪ Comfortable working with static code analysis tools and agile methodologies ▪ Good knowledge of JIRA, Confluence, and project collaboration tools ▪ Strong communication skills and ability to mentor team members ▪ Ability to prepare detailed technical documentation and design specifications ▪ Passion for clean code, best practices, and scalable architecture ▪ Nice to have: experience with identity providers like Auth0, Keycloak, IdentityServer. ▪ Take ownership of tasks and user stories; provide accurate estimations ▪ Provide technically sound advice/decisions on how to develop functionality to implement client’s business logic in software systems ▪ Lead sprint activities, including task management and code reviews ▪ Design and implement secure, scalable, and maintainable solutions ▪ Conduct technical discovery and performance analysis of critical systems ▪ Write low-level design and as-built documents ▪ Translate business requirements into technical specifications and working code ▪ Develop and maintain unit, integration, and regression test cases ▪ Ensure adherence to TDD and promote high code coverage ▪ Integrate multiple data sources and systems with dissimilar structures ▪ Contribute to overall system design with focus on security and resilience ▪ Use static code analysis tools and set up CI/CD pipelines ▪ Participate in Agile ceremonies: grooming, planning, stand-ups, and retrospectives ▪ Collaborate with technical architects on solution design ▪ Mentor junior team members on coding standards, sprint tasks, and technology ▪ Troubleshoot, test, and optimize core software and databases ▪ Stay updated with industry trends and promote best practices within the team. ▪ Identify challenges, initiate PoCs, and perform feasibility studies. Participate in the full product development cycle, including brainstorming, release planning and estimation, implementing and iterating on code, coordinating with internal and external clients, internal code and design reviews, MVP and production releases, quality assurance, and product support. Highly effective and thrive in a dynamic environment. Comfortable with proactive outward communication and technical leadership and positive about accepting challenges. Adhere to ISMS Policies and procedures. Job Type: Full-time Pay: From ₹2,500,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Work Location: In person

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from TCS!!! TCS is hiring for Snowflake Tech Architect / Tech Lead Experience: 10+ years Location: Chennai/Bangalore/Mumbai Required Technical Skill Set 10 years of total experience and at least 3+ years of expertise in Cloud data warehouse technologies on Snowflake and AWS, Azure or GCP. At least one End-to-end Snowflake implementation is a must covering all aspects including architecture, design, data engineering, data visualization and data governance (specifically data quality and lineage). Significant experience with data migrations and development of Operational Data Stores, Enterprise Data Warehouses and Data Marts. Good hands-on knowledge on SQL and Data Warehousing life cycle is an absolute requirement. Significant experience with data migrations and development, design, Operational Data Stores, Enterprise Data Warehouses and Data Marts. Experience with cloud ETL and ELT in one of the tools like DBT/Glue/ADF or Matillion or any other ELT tool and exposure to Bigdata ecosystem (Hadoop). Expertise with at least one of the Traditional data warehouses solutions on Oracle, Teradata, and Microsoft SQL server. Excellent communication skills to liaise with Business & IT stakeholders. Expertise in planning execution of a project and efforts estimation. Understanding of Data Vault, data mesh and data fabric architecture patterns. Exposure to working in Agile ways of working. Must-Have Experience on cloud services like S3/Blob/GCS, Lambda, Glue/ADF/, Apache Airflow. Experience or understanding of Banking and Financial Services business domain. Good-to-Have Experience in coding languages like Python and PySpark would be an added advantage. Experience in DevOps, CI/CD, GitHub is a big plus. Responsibility of / Expectations from the Role 1.Provide Technical pre-sales enablement on data on cloud aspects covering data architecture, data engineering, data modelling, data consumption and data governance aspects focusing on Snowflake. 2.Expert level knowledge on Snowflake Data engineering, performance, consumption, security, governance and admin aspects 3.To work with cross-functional teams in onsite/offshore setup and discuss and solve technical problems with various stakeholders including customer teams. 4.Creating technical proposals and responding to large scale RFPs. 5.Have discussion on existing solution, design/optimize solution and prepare execution planning for development, deployment and enabling end users to utilize the data platform. 6.Role demands excellent oral and written communication skills to organize workshops, meetings with account teams, account leadership and senior stakeholders from client including CXO level. 7.Adept in creating POV and conduct PoC. 8.Liaise with Technology partners like Snowflake, Matillion, DBT etc.,
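To make the ELT aspect above concrete, here is a minimal, hypothetical sketch of loading staged files into Snowflake with the Python connector; the account, stage, and table names are placeholders, and in a real engagement this step would more likely be expressed in DBT, Matillion, ADF, or Glue, as the posting notes.

```python
# Hedged sketch: load staged files into a Snowflake table and sanity-check the result.
# Connection details, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="ETL_USER",
    password="***",                 # use key-pair auth or a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls files already uploaded to an internal/external stage.
    cur.execute("""
        COPY INTO STAGING.ORDERS_RAW
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"')
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    cur.execute("SELECT COUNT(*) FROM STAGING.ORDERS_RAW")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```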

Posted 2 weeks ago

Apply

45.0 - 50.0 years

0 Lacs

India

On-site

Bombay Mercantile Co-Operative Bank Ltd., a leading Multi-State Scheduled Bank, with 52 branches across 10 states, requires dynamic and experience personnel. Age: 45-50 Years Location: Mumbai Qualification and Experience: Graduate/Postgraduate in Computer Science, Information Systems, Data Analytics, Statistics, or a related field. Experience with BI tools such as Tableau, Power BI, or similar is an added advantage. Minimum 10–15 years of relevant experience in MIS, with at least 5 years in a leadership role, preferably in a cooperative or public sector bank. Knowledge of CBS systems, RBI reporting portals, data analytics tools, and SQL/database management essential. Key Responsibilities: 1. MIS Strategy & Planning Develop and implement an effective MIS framework to support strategic and operational objectives. Ensure integration of MIS with Core Banking System (CBS), Loan Origination System (LOS), and other internal systems for seamless data flow. 2. Data Collection, Processing & Reporting Design, standardize, and maintain reporting formats for daily, weekly, monthly, and quarterly reporting across departments. Ensure timely generation of reports for internal management, Board of Directors, auditors, and regulators. Prepare and submit statutory and compliance reports to RBI, NABARD, State Registrar, etc. 3. Regulatory & Compliance Reporting Ensure all RBI-mandated MIS submissions (e.g., CRILC, XBRL, returns under ADF, etc.) are accurate and timely. Track regulatory changes and incorporate them into reporting frameworks. 4. Performance & Operational Dashboards Develop real-time dashboards and KPIs for key functions such as credit, deposits, NPA tracking, branch performance, etc. Provide analytics support to business heads for performance analysis and forecasting. 5. Data Governance & Quality Maintain high standards of data integrity, consistency, and security across systems. Conduct regular audits and validations of MIS data to identify and correct discrepancies. 6. System Enhancement & Automation Liaise with IT department and software vendors to automate recurring reports. Support implementation of business intelligence (BI) tools, data warehouses, and report automation solutions. 7. Support to Management Assist senior management with ad-hoc analysis, strategic reviews, and Board-level presentations. Provide MIS support for product planning, regulatory inspections, audits, and business strategy. Job Type: Full-time Schedule: Day shift Ability to commute/relocate: Masjid, Mumbai, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Management Information Systems: 10 years (Preferred) Work Location: In person

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS Hiring for Azure FinOps Experience: 8 to 10 Years Only Job Location: Kolkata/Pune Required Technical Skill Set: As a Cloud FinOps consultant, you are responsible for developing and implementing a robust program for cloud cost management that includes: a service-based cost allocation and classification strategy, tracking and management of cloud cost, cloud services rate setting, and defining consumption and showback/chargeback reports from both the provider and consumer perspective. Experienced Cloud FinOps practitioner who can work mainly on Azure, along with AWS platforms, for cloud optimization and cost savings (certification as a FinOps Practitioner will be an added advantage). Create a cloud cost optimization framework and governance mechanism. Takes ownership of cost analysis, reviewing recommendations, creating budget alerts, purchasing reservations / savings plans, tagging, anomaly detection, and forecasting spend. Experienced in rightsizing compute services, identifying unused resources, and Azure Hybrid Benefit (AHUB) implementation. Define and set up the cloud spend governance process and cloud spend tracking mechanism. Experience in driving deep architectural discussions to help customers ensure they are making the most cost-efficient cloud usage choices. Interacting with vendors on Enterprise Agreements, MCA, discounts, and various other optimization opportunities. Creating dashboards for visualizing monthly and yearly cost with various filters. Drive close working relationships with IT teams, finance teams, architects, and operations. Knowledge of IaaS, PaaS services and cloud technologies, for example Databricks, AKS, ADF, Log Analytics, load balancing, disaster recovery. Knowledge of app & data architectures and cloud-native patterns for development, etc. Kind Regards, Priyankha M

Posted 2 weeks ago

Apply

5.0 - 9.0 years

3 - 9 Lacs

Chennai

On-site

Technical Lead Chennai 5-9 Years INDIA Job Family Practice (Digital) Job Description (Posting). To be responsible for managing technology in projects and providing technical guidance / solutions for work completion. (1.) To develop and guide the team members in enhancing their technical capabilities and increasing productivity. (2.) To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalations. (3.) To be responsible for providing technical guidance / solutions; define, advocate, and implement best practices and coding standards for the team. (4.) To ensure process compliance in the assigned module, and participate in technical discussions/reviews as a technical consultant for feasibility studies (technical alternatives, best packages, supporting architecture best practices, technical risks, breakdown into components, estimations). Qualification B-Tech No. of Positions 1 Skill (Primary) Oracle (APPS)-Oracle E-Business Suite Technical-ADF Auto req ID 1541137BR Skill Level 3 (Secondary Skill 1) Technical Skills (APPS)-Datawarehouse-Extract Transform Load (ETL) Automation

Posted 2 weeks ago

Apply

5.0 years

15 - 24 Lacs

Bengaluru

On-site

Job Title: Senior Data Engineer – Azure | ADF | Databricks | PySpark | AWS
Location: Bangalore, Hyderabad, Chennai (Hybrid Mode)
Experience Required: 5+ Years
Notice Period: Immediate

Job Description
We are looking for a Senior Data Engineer who is passionate about designing and developing scalable data pipelines, optimizing data architecture, and working with advanced big data tools and cloud platforms. This is a great opportunity to be a key player in transforming data into meaningful insights by leveraging modern data engineering practices on Azure, AWS, and Databricks. You will work with cross-functional teams including data scientists, analysts, and software engineers to deliver robust data solutions. The ideal candidate will be technically strong in Azure Data Factory, PySpark, Databricks, and AWS services, and will have experience in building end-to-end ETL workflows and driving business impact through data.

Key Responsibilities
Design, build, and maintain scalable and reliable data pipelines and ETL workflows.
Implement data ingestion and transformation using Azure Data Factory (ADF) and Azure Databricks (PySpark).
Work across multiple data platforms including Azure, AWS, Snowflake, and Redshift.
Collaborate with data scientists and business teams to understand data needs and deliver solutions.
Optimize data storage, processing, and retrieval for performance and cost-effectiveness.
Develop data quality checks and monitoring frameworks for pipeline health.
Ensure data governance, security, and compliance with industry standards.
Lead code reviews, set data engineering standards, and mentor junior team members.
Propose and evaluate new tools and technologies for continuous improvement.

Must-Have Skills
Strong programming skills in Python, SQL, or Scala.
Azure Data Factory, Azure Databricks, Synapse Analytics.
Hands-on experience with PySpark, Spark, Hadoop, Hive.
Experience with cloud platforms (Azure preferred; AWS/GCP acceptable).
Data warehousing: Snowflake, Redshift, BigQuery.
Strong ETL/ELT pipeline development experience.
Workflow orchestration tools such as Airflow, Prefect, or Luigi.
Excellent problem-solving, debugging, and communication skills.

Nice to Have
Experience with real-time streaming tools (Kafka, Flink, Spark Streaming).
Exposure to data governance tools and regulations (GDPR, HIPAA).
Familiarity with ML model integration into data pipelines.
Containerization and CI/CD exposure: Docker, Git, Kubernetes (basic).
Experience with vector databases and unstructured data handling.

Technical Environment
Programming: Python, Scala, SQL
Big Data Tools: Spark, Hadoop, Hive
Cloud Platforms: Azure (ADF, Databricks, Synapse), AWS (S3, Glue, Lambda), GCP
Data Warehousing: Snowflake, Redshift, BigQuery
Databases: PostgreSQL, MySQL, MongoDB, Cassandra
Orchestration: Apache Airflow, Prefect, Luigi
Tools: Git, Docker, Azure DevOps, CI/CD pipelines

Soft Skills
Strong analytical thinking and problem-solving abilities.
Excellent verbal and written communication.
Collaborative team player with leadership qualities.
Self-motivated, organized, and able to manage multiple projects.

Education & Certifications
Bachelor's or Master's degree in Computer Science, IT, Engineering, or equivalent.
Cloud certifications (e.g., Microsoft Azure Data Engineer, AWS Big Data) are a plus.

Key Result Areas (KRAs)
Timely delivery of high-performance data pipelines.
Quality of data integration and governance compliance.
Business team satisfaction and data readiness.
Proactive optimization of data processing workloads.

Key Performance Indicators (KPIs)
Pipeline uptime and performance metrics.
Reduction in overall data latency.
Zero critical issues in production post-release.
Stakeholder satisfaction score.
Number of successful integrations and migrations.

Job Types: Full-time, Permanent
Pay: ₹1,559,694.89 - ₹2,441,151.11 per year
Benefits: Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus
Application Question(s): What is your notice period in days?
Experience: Azure Data Factory, Azure Databricks, Synapse Analytics: 5 years (Required); Python, SQL, or Scala: 5 years (Required)
Work Location: In person
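To make the ingestion-and-transformation responsibility above concrete, here is a minimal, hypothetical PySpark sketch of the kind of step an ADF-triggered Databricks job might run. The storage account, container paths, and column names are placeholders, not details from the posting.

```python
# Minimal PySpark sketch: read raw files, apply basic transformations, write curated output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Read raw CSV files assumed to be landed by an upstream ADF copy activity.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Basic cleansing and enrichment: deduplicate, standardise types, stamp a load date.
curated = (raw.dropDuplicates(["order_id"])
              .withColumn("order_amount", F.col("order_amount").cast("decimal(18,2)"))
              .withColumn("load_date", F.current_date()))

# Write to a curated zone as Parquet, partitioned by load date.
(curated.write
        .mode("overwrite")
        .partitionBy("load_date")
        .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```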

Posted 2 weeks ago

Apply

7.0 - 12.0 years

20 - 27 Lacs

Bengaluru

Work from Office

TECHNICAL SKILLS AND EXPERIENCE
Most important:
7+ years of professional experience as a data engineer, with at least 4 utilizing cloud technologies.
Proven experience building ETL or ELT data pipelines with Databricks, in either Azure or AWS, using PySpark.
Strong experience with the Microsoft Azure data stack (Databricks, Data Lake Gen2, ADF, etc.).
Strong SQL skills and proficiency in Python, adhering to standards such as PEP 8.
Proven experience with unit testing and applying appropriate testing methodologies using libraries such as Pytest, Great Expectations, or similar.
Demonstrable experience with CI/CD, including release and test automation tools and processes such as Azure DevOps, Terraform, PowerShell, and Bash scripting or similar.
Strong understanding of data modeling, data warehousing, and OLAP concepts.
Excellent technical documentation skills.
Preferred candidate profile
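The unit-testing requirement above can be illustrated with a small, hypothetical Pytest example for a PySpark transformation. The function under test, the fixture, and the column names are invented for this sketch and are not part of the posting.

```python
# Hedged sketch of a Pytest unit test for a simple PySpark transformation.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_net_amount(df):
    """Example transformation under test: net amount = gross amount minus discount."""
    return df.withColumn("net_amount", F.col("gross_amount") - F.col("discount"))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session is enough for fast, isolated unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 10.0)], ["gross_amount", "discount"])
    result = add_net_amount(df).collect()[0]
    assert result["net_amount"] == 90.0
```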

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes
Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
Document and communicate milestones/stages for end-to-end delivery.
Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
Validate results with user representatives, integrating the overall solution seamlessly.
Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes
Adherence to engineering processes and standards.
Adherence to schedule/timelines.
Adherence to SLAs where applicable.
Number of defects post delivery.
Number of non-compliance issues.
Reduction of recurrence of known defects.
Quick turnaround of production bugs.
Completion of applicable technical/domain certifications.
Completion of all mandatory training requirements.
Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times).
Average time to detect, respond to, and resolve pipeline failures or data issues.
Number of data security incidents or compliance breaches.

Outputs Expected
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples
Proficiency in SQL, Python, or other programming languages used for data manipulation.
Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
Experience in performance tuning of data processes.
Expertise in designing and optimizing data warehouses for cost efficiency.
Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
Capacity to clearly explain and communicate design and development aspects to customers.
Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge Examples
Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLS.
Proficiency in SQL for analytics, including windowing functions.
Understanding of data schemas and models relevant to various business contexts.
Familiarity with domain-related data and its implications.
Expertise in data warehousing optimization techniques.
Knowledge of data security concepts and best practices.
Familiarity with design patterns and frameworks in data engineering.

Additional Comments
Tech skills:
Proficient in Python (including popular Python packages such as Pandas and NumPy) and SQL.
Strong background in distributed data processing and storage (e.g., Apache Spark, Hadoop).
Large-scale (TBs of data) data engineering skills: model data and create production-ready ETL pipelines.
Development experience with at least one cloud (Azure highly preferred; AWS, GCP).
Knowledge of data lake and data lakehouse patterns.
Knowledge of ETL performance tuning and cost optimization.
Knowledge of data structures and algorithms and good software engineering practices.
Soft skills:
Strong communication skills to articulate complex situations concisely.
Comfortable with picking up new technologies independently.
Eye for detail, good data intuition, and a passion for data quality.
Comfortable working in a rapidly changing environment with ambiguous requirements.
Skills: Python, SQL, AWS, Azure
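As a small illustration of the SQL windowing-function proficiency listed above, the hypothetical PySpark sketch below ranks each customer's transactions by date and keeps a running total. The sample data and column names are invented for the example.

```python
# Windowing-function sketch: per-customer ranking and running totals.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window_demo").getOrCreate()

txns = spark.createDataFrame(
    [("C1", "2024-01-01", 120.0), ("C1", "2024-01-05", 80.0), ("C2", "2024-01-02", 50.0)],
    ["customer_id", "txn_date", "amount"],
)

# Partition by customer and order by date, as a SQL OVER (PARTITION BY ... ORDER BY ...) clause would.
w = Window.partitionBy("customer_id").orderBy("txn_date")
result = (txns.withColumn("txn_rank", F.row_number().over(w))
              .withColumn("running_total", F.sum("amount").over(w)))
result.show()
```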

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Location: Hyderabad, Indore and Ahmedabad (India)

What You Will Do:
The following are high-level responsibilities you will take on, but they are not limited to:
Develop and maintain data pipelines using Azure Data Factory (ADF) / Databricks for data integration and ETL processes.
Design, implement, and optimize Power BI / Fabric reports and dashboards to deliver actionable business insights.
Collaborate with data analysts, business users, and other teams to understand data requirements and deliver solutions using ADF and Power BI.
Extract, transform, and load (ETL) data from various sources into cloud-based storage systems such as Azure Data Lake or Azure SQL Database.
Work with large datasets and optimize queries and pipelines for performance and scalability.
Ensure data quality, integrity, and availability throughout the data lifecycle.
Automate repetitive data tasks, ensuring timely and accurate reporting.
Monitor and troubleshoot data pipelines, addressing any performance or data issues promptly.
Support data visualization and reporting tools, including Power BI, to enable business stakeholders to make data-driven decisions.
Write clear, efficient, and maintainable code for data transformations and automation.

Required Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
8+ years of hands-on experience in a Data Engineering, BI Developer, or similar role.
Proficiency with Azure Data Factory (ADF), including the creation of data pipelines and managing data flows.
Strong experience in Power BI, including report creation, dashboard development, and data modeling.
Experience with SQL and database management (e.g., Azure SQL Database, SQL Server).
Knowledge of cloud platforms, especially Microsoft Azure.
Familiarity with data warehousing concepts and ETL processes.
Experience working with cloud-based data storage solutions (e.g., Azure Data Lake, Azure Blob Storage).
Strong programming skills in languages such as Python, SQL, or other relevant languages.
Ability to troubleshoot and optimize data pipelines for performance and reliability.

Preferred Qualifications:
Familiarity with data modeling techniques and practices for Power BI.
Knowledge of Azure Databricks or other data processing frameworks.
Knowledge of Microsoft Fabric or other cloud platforms.

What we need:
B.Tech in Computer Science or equivalent.

Why join us?
Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
Gain hands-on experience in content marketing with exposure to real-world projects.
Opportunity to learn from experienced professionals and enhance your marketing skills.
Contribute to exciting initiatives and make an impact from day one.
Competitive stipend and potential for growth within the company.
Recognized for excellence in data and AI solutions with industry awards and accolades.
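For smaller batch volumes, the ETL responsibility above (loading source data into Azure SQL Database or Azure Data Lake) can be sketched with pandas and SQLAlchemy as shown below. The source file, server, database, credentials, and table names are placeholders; in practice ADF copy activities or Databricks would usually handle larger volumes.

```python
# Hedged sketch: small-batch ETL from a CSV extract into an Azure SQL Database table.
import pandas as pd
from sqlalchemy import create_engine

# Extract: read a source extract (e.g., a file exported from an upstream system).
df = pd.read_csv("daily_sales.csv", parse_dates=["sale_date"])

# Transform: simple cleanup before loading.
df = df.dropna(subset=["sale_id"]).drop_duplicates(subset=["sale_id"])

# Load: append into a staging table via SQLAlchemy + pyodbc (placeholder credentials).
engine = create_engine(
    "mssql+pyodbc://etl_user:password@example-server.database.windows.net/salesdb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
df.to_sql("daily_sales", engine, schema="stg", if_exists="append", index=False)
```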

Posted 2 weeks ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Role: Azure Integration Engineer
Experience: 5 to 8 years
Location: Indore
Immediate joiners preferred

Must Have:
Proficiency in Azure Logic Apps, Azure API Management, Azure Service Bus, Azure Event Grid, ADF, C#.NET, and Azure Functions.
Experience with JSON, XML, and other data formats.
Working experience with Azure DevOps and GitHub.
Knowledge of integration monitoring and lifecycle management.

Roles & Responsibilities:
Designing, developing, and deploying integration workflows using Azure Logic Apps.
Creating and managing APIs using Azure API Management.
Developing event-driven solutions with Azure Event Grid and Azure Service Bus.
Building serverless functions with Azure Functions to support integration logic.
Developing data transformations and mappings.
Implementing integration patterns such as API integration, message queuing, and event-driven architecture.
Working with different data formats (e.g., JSON, XML) and protocols (SOAP, REST, etc.).
Performing unit testing and helping with integration testing.
Supporting UAT.
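As a hedged illustration of the serverless integration logic described above, here is a minimal HTTP-triggered Azure Function using the Python v1 programming model (the role also lists C#.NET; Python is used here only for consistency with the other sketches). The payload fields and mapping are hypothetical.

```python
# Minimal HTTP-triggered Azure Function that validates a JSON payload and returns a mapped response.
import json
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("Order integration function triggered.")
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Invalid JSON body", status_code=400)

    # Simple field mapping/transformation before handing off to the next system.
    order = {
        "orderId": payload.get("id"),
        "customer": payload.get("customer_name"),
        "total": float(payload.get("amount", 0)),
    }
    return func.HttpResponse(json.dumps(order), mimetype="application/json")
```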

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from TCS! TCS is hiring for Azure Data Engineer
Exp: 8-10 years
Location: Kolkata/Pune/Mumbai/Bangalore

Must-Have
Strong experience in Azure Data Factory, ADB (Azure Databricks), Synapse, and PySpark; establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
A minimum of 7 years' experience with large SQL data marts.
Expert relational database experience; the candidate should demonstrate the ability to navigate through massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners.
Minimum 7 years of troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving and storage; performance tuning; ETL importing of large volumes of data extracted from multiple systems; capacity planning.
Experience in T-SQL programming along with the Azure Data Factory framework and Python scripting.
Works well independently as well as within a team.
Proactive and organized, with excellent analytical and problem-solving skills.
Flexible and willing to learn; a can-do attitude is key.
Strong verbal and written communication skills.

Good-to-Have
Financial institution data mart experience is an asset.
Experience in .NET applications is an asset.
Experience and expertise in Tableau-driven dashboard design is an asset.

Responsibility of / Expectations from the Role
Azure Data Engineer (ADF, ADB): ETL processes using frameworks like Azure Data Factory, Synapse, or Databricks; establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
T-SQL programming along with the Azure Data Factory framework and Python scripting.
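One common way to establish the Databricks-to-ADLS connectivity this role mentions is to set the storage account key on the Spark session from a secret scope. The sketch below assumes a Databricks notebook, where spark and dbutils are predefined; the account, container, and secret names are placeholders, and service principals or managed identities are often preferred over account keys in production.

```python
# Hedged Databricks notebook sketch: configure access to ADLS Gen2 and read a curated dataset.
storage_account = "examplelake"  # placeholder storage account name

# dbutils.secrets.get reads from a pre-configured Databricks secret scope (placeholder names).
account_key = dbutils.secrets.get(scope="etl-secrets", key="lake-account-key")

spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)

# Once configured, files landed by ADF or Synapse in the lake can be read directly.
df = spark.read.parquet(
    f"abfss://curated@{storage_account}.dfs.core.windows.net/customers/"
)
df.createOrReplaceTempView("customers")
spark.sql("SELECT COUNT(*) AS row_count FROM customers").show()
```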

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
