8.0 - 18.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS! TCS is hiring for Data Architect.

Interview Mode: Virtual
Required Experience: 8-18 years
Work Location: PAN India

Data Architect / Technical Architect with experience in designing data platforms on one or more of the major platforms such as Snowflake, Databricks, Azure ML, or the AWS data platforms.

Hands-on experience in ADF, HDInsight, Azure SQL, PySpark, Python, MS Fabric, and data mesh. Good to have: Spark SQL, Spark Streaming, Kafka.

Hands-on experience in Databricks on AWS, Apache Spark, AWS S3 (data lake), AWS Glue, and AWS Redshift/Athena. Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch.

If interested, kindly send your updated CV and the details below by e-mail to srishti.g2@tcs.com:

Name:
E-mail ID:
Contact number:
Highest qualification:
Highest qualification university:
Preferred location:
Current organization:
Previous organization name:
Total years of experience:
Relevant years of experience:
Any gap (career/education): mention the number of months/years:
If any, the reason for the gap:
Is it a rebegin:
Current CTC:
Expected CTC:
Notice period:
Have you worked with TCS before (permanent/contract):
Posted 1 month ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
About Neudesic

Passion for technology drives us, but it's innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart is an amazing collection of people who live and lead with our core values. We believe that everyone should be Passionate about what they do, Disciplined to the core, Innovative by nature, committed to a Team, and conduct themselves with Integrity. If these attributes mean something to you, we'd like to hear from you.

We are currently looking for Azure Data Engineers to join Neudesic's Data & AI team.

Must-Have Skills:
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats such as JSON and Parquet
- Experience in creating ADF pipelines to source and process data sets
- Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets (see the sketch after this posting)
- Good understanding of SQL, databases, NoSQL databases, data warehouses, Hadoop, and the various data storage options on the cloud
- Development experience in orchestration of pipelines
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience in handling operations and integration with a source repository
- Good knowledge of data warehouse concepts and data warehouse modelling

Good-to-Have Skills:
- Familiarity with DevOps, Agile/Scrum methodologies, and CI/CD
- Domain-driven development exposure
- Analytical/problem-solving skills
- Strong communication skills
- Good experience with unit testing, integration testing, and UAT support
- Ability to design and code reusable components and functions
- Ability to review design and code, and provide review comments with justification
- Zeal to learn and adopt new tools and technologies
- Power BI and Data Catalog experience

Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.

Neudesic is an Equal Opportunity Employer. Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.

Neudesic has been acquired by IBM and will be integrated into the IBM organization; Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
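To make the cleanse/transform/enrich responsibility above concrete, here is a minimal PySpark sketch of the kind of Databricks notebook logic such a role involves. The storage paths, column names, and reference dataset are hypothetical assumptions, not Neudesic's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse-enrich-demo").getOrCreate()

# Ingest raw JSON landed by an ADF copy activity (hypothetical path).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Cleanse: drop duplicates, standardize types, filter out bad records.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Enrich: join a reference dataset to add customer attributes.
customers = spark.read.parquet("abfss://ref@examplelake.dfs.core.windows.net/customers/")
enriched = clean.join(customers.select("customer_id", "segment"), "customer_id", "left")

# Persist as Parquet for downstream Synapse / Power BI consumption.
enriched.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders_enriched/"
)
```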
Posted 1 month ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Job Title: Azure Solution Architect (Data Engineering)
Location: Noida or Bangalore
Job Type: Full-time
Experience: 8+ years
Industry: IT Services & Analytics

Position Summary: We are looking for individuals with experience in analytics architecture domains, including the design, planning, and estimation of analytics solutions.

Key Responsibilities:
- Interact with customers for requirement gathering.
- Handle a team; hands-on ADF and ADB experience (development and implementation), including cluster knowledge.
- Convert technical requirements into end-to-end solution designs.
- Design solution architectures with pros and cons, and prepare a project plan covering source loading and how many resources are required to complete the development.
- Plan the tentative cost.

Requirements:
- Bachelor's degree in computer science or a related analytical field, or equivalent experience, is preferred.
- 8+ years' experience in analytics architecture domains (e.g., business architecture, solutions architecture, physical architecture).
- Minimum of 4 years of experience in designing and implementing cloud analytics workloads in Azure.
- Minimum of 8 years of architecture experience with the design, planning, and estimation of analytics solutions.
- Minimum of 4 years of experience handling analytics workloads in large-scale analytical environments.
- Experience in managing large operational cloud environments spanning multiple tenants through techniques such as multi-account management and Azure Well-Architected best practices.
- Experience with analysing and defining technical requirements and design specifications.
- Experience with database design for both relational and document-based database systems.
- Experience with integrating complex multi-tier applications.
- Proven ability to build data solutions with tools such as Azure Data Factory and Azure Databricks, using an object-oriented or functional programming language.

Technical Skills:
- Hands-on experience with the MS Azure technology stack and related tools (Databricks, Data Factory, Data Flow, Synapse Analytics, Synapse ML, Gen2 Storage, etc.).
- B.Tech or a degree in engineering is needed.
- Current understanding of best practices for system security measures.
- Experience in software engineering and architectural design.
- Positive attitude in meeting challenges and working at a high level.
- Advanced understanding of business analysis techniques and processes.
- Azure ML experience (nice to have).
- MS Azure certifications: Fundamentals, Solution Architect, Data Engineer (preferred).

About Polestar
Polestar Solutions enables enterprises in their digital transformation journey by offering consulting and implementation services related to Data Analytics and Enterprise Performance Management (EPM). We apply our brains and hearts to deliver the best-suited solutions to our customers across industries such as Retail & E-commerce, Consumer Goods, Pharmaceuticals & Life Sciences, Real Estate & Senior Housing, Hi-tech, Media & Telecom, Manufacturing, and Automotive. Our in-house research and innovation lab has conceived multiple plug-n-play apps, toolkits, and plugins to streamline implementation and speed time-to-market. We leverage leading technology stacks such as Microsoft Azure, AWS, Google Cloud, Microsoft Power BI, Anaplan, Qlik, Tableau, SAS, Cloudera, and Red Hat to match your business goals with the optimum technology solutions for all your stakeholders.
Remotely located yet hyper-connected, we find geographic presence is hardly a constraint. For the record, we have offices in the United States (Delaware) and India (Delhi-NCR, Mumbai & Kolkata), and we serve customers across 19 countries. Our expertise and deep passion for what we do have brought us many accolades, including:
- Featured on Forrester's Now Tech: Customer Analytics Service Providers Report, Q2 2021.
- Recognized as Anaplan's India RSI Partner of the Year, FY21.
- Featured in the Economic Times India's Growth Champions, FY2022.
- Recognized as a Clutch Leader for Top IT Services in FY 2020, 2021 & 2022, and for Top Business Analytics Services in India in FY20.
- FT High-Growth Companies Asia-Pacific for 2018 and 2020.
- 1st rank in Deloitte's Tech Fast50 list in the Asia-Pacific region, 2017, and featured 3 years in a row.
- Elite Qlik Partner and member of the Qlik Partner Advisory Council; Microsoft Gold Partner for Data & Cloud Platforms.
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Neudesic

Passion for technology drives us, but it's innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart is an amazing collection of people who live and lead with our core values. We believe that everyone should be Passionate about what they do, Disciplined to the core, Innovative by nature, committed to a Team, and conduct themselves with Integrity. If these attributes mean something to you, we'd like to hear from you.

We are currently looking for Azure Data Engineers to join Neudesic's Data & AI team.

Must-Have Skills:
- Minimum 5 years of relevant experience in Azure data engineering
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats such as JSON and Parquet
- Experience in creating ADF pipelines to source and process data sets
- Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets
- Good understanding of SQL, databases, NoSQL databases, data warehouses, Hadoop, and the various data storage options on the cloud
- Development experience in orchestration of pipelines
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience in handling operations and integration with a source repository
- Good knowledge of data warehouse concepts and data warehouse modelling

Good-to-Have Skills:
- Familiarity with DevOps, Agile/Scrum methodologies, and CI/CD
- Domain-driven development exposure
- Analytical/problem-solving skills
- Strong communication skills
- Good experience with unit testing, integration testing, and UAT support
- Ability to design and code reusable components and functions
- Ability to review design and code, and provide review comments with justification
- Zeal to learn and adopt new tools and technologies
- Power BI and Data Catalog experience

Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.

Neudesic is an Equal Opportunity Employer. Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.

Neudesic has been acquired by IBM and will be integrated into the IBM organization; Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
Posted 1 month ago
0.0 - 4.0 years
0 Lacs
Jaipur, Rajasthan
Remote
Senior Data Engineer

Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Senior Data Engineer
Experience: 4–6 years
Location: Udaipur, Jaipur, Kolkata

Job Description: We are looking for a highly skilled and experienced Data Engineer with 4–6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
· Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python (a minimal sketch of this pattern follows this posting)
· Collaborate with data analysts, data scientists, and product teams to understand data needs
· Optimize queries and data models for performance and reliability
· Integrate data from various sources, including APIs, internal databases, and third-party systems
· Monitor and troubleshoot data pipelines to ensure data quality and integrity
· Document processes, data flows, and system architecture
· Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
· 4–6 years of experience in data engineering, data architecture, or backend development with a focus on data
· Strong command of SQL for data transformation and performance tuning
· Experience with Python (e.g., pandas, Spark, ADF)
· Solid understanding of ETL/ELT processes and data pipeline orchestration
· Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
· Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
· Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
· Basic programming skills
· Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
· Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
· Exposure to enterprise solutions (e.g., Databricks, Synapse)
· Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
· Background in real-time data streaming and event-driven architectures
· Understanding of data governance, security, and compliance best practices
· Prior experience working in an agile development environment

Educational Qualifications:
· Bachelor's degree in Computer Science, Information Technology, or a related field.
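As referenced in the responsibilities above, here is a minimal, self-contained sketch of an SQL-plus-Python ETL pass. It uses sqlite3 and pandas so it runs anywhere; the input file, tables, and columns are illustrative assumptions, not Kadel Labs' actual stack.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("warehouse.db")

# Extract: a CSV export stands in for an API or source database.
raw = pd.read_csv("sales_export.csv", parse_dates=["sold_at"])

# Transform: normalize keys, derive columns, drop unusable rows.
raw["region"] = raw["region"].str.strip().str.upper()
raw["net_amount"] = raw["gross_amount"] - raw["discount"]
clean = raw.dropna(subset=["order_id", "sold_at"])

# Load: stage with pandas, then refresh the target idempotently in SQL.
clean.to_sql("stg_sales", conn, if_exists="replace", index=False)
conn.execute("CREATE TABLE IF NOT EXISTS fact_sales AS SELECT * FROM stg_sales WHERE 0")
conn.executescript("""
    DELETE FROM fact_sales WHERE order_id IN (SELECT order_id FROM stg_sales);
    INSERT INTO fact_sales SELECT * FROM stg_sales;
""")
conn.commit()
```

The delete-then-insert refresh keeps the load rerunnable: replaying the same batch never duplicates rows in the target table.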
Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Job Types: Full-time, Permanent
Pay: ₹826,249.60 - ₹1,516,502.66 per year
Benefits: Flexible schedule, health insurance, leave encashment, paid time off, Provident Fund, work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, performance bonus, quarterly bonus, yearly bonus
Ability to commute/relocate: Jaipur, Rajasthan: reliably commute or planning to relocate before starting work (Required)
Experience: Data Engineer: 4 years (Required)
Location: Jaipur, Rajasthan (Required)
Work Location: In person
Posted 1 month ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
* Design and build data architecture frameworks leveraging Azure services (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database, ADLS Gen2, Synapse engineering, Fabric notebooks, PySpark, Scala, Python, etc.).
* Define and implement reference architectures and architecture blueprinting.
* Demonstrate, and speak confidently about, a wide variety of data engineering tools and architectures across cloud providers, especially on the Azure platform.
* Build data products, data processing frameworks, metadata-driven ETL pipelines (a sketch of this pattern follows), data security, data standardization, data quality, and data reconciliation workflows.

Requirements:
* 10+ years of experience in data warehousing and Azure cloud technologies.
* Strong hands-on experience with Microsoft Fabric, Synapse, ADF, SQL, and Python/PySpark.
* Proven expertise in designing and implementing data architectures on Azure using Microsoft Fabric, Azure Synapse, ADF, and MS Fabric notebooks.
* Exposure to Azure DevOps and Business Intelligence.
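Since the posting highlights metadata-driven ETL pipelines, here is a hedged sketch of the core idea: pipeline behaviour is read from configuration rather than hard-coded per source, so onboarding a new feed is a metadata change, not a code change. Entity names and paths are hypothetical; in practice the metadata would live in a control table (e.g., Azure SQL).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("CREATE DATABASE IF NOT EXISTS bronze")

# A list of dicts stands in for a control table read at runtime.
INGESTION_METADATA = [
    {"entity": "customers", "fmt": "csv",     "source": "/landing/customers/", "target": "bronze.customers"},
    {"entity": "orders",    "fmt": "parquet", "source": "/landing/orders/",    "target": "bronze.orders"},
]

for meta in INGESTION_METADATA:
    # One generic loader serves every entity in the metadata list.
    df = (spark.read.format(meta["fmt"])
                .option("header", "true")
                .load(meta["source"]))
    df.write.mode("overwrite").saveAsTable(meta["target"])
```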
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Informatica, IICS, ADF, Databricks
Skill: Informatica, IICS, ADF, Databricks
Level: SSE (Sr. Software Engineer)
Location: Pune
Type of Demand: Full-Time Employee
Work Details: Office
Minimum Experience: 4+ years
Company Name: PibyThree Consulting Pvt Ltd.
Website: http://pibythree.com

About Us
Πby3 is a cloud transformation company enabling enterprises for the future. We are a nimble, highly dynamic, focused team with a passion to serve our clients with the utmost trust and ownership. Our expertise in technology, built on vast experience over the years, helps clients get solutions with optimized cost and reduced risk.

We are seeking a skilled Data Engineer with hands-on experience in Informatica PowerCenter, Informatica Intelligent Cloud Services (IICS), Azure Data Factory (ADF), and Databricks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines and integration workflows across cloud and on-premise environments. Experience with cloud platforms (preferably Azure), data lakes, and ETL/ELT best practices is essential.

Key Responsibilities:
- Develop and maintain data integration workflows using Informatica (PowerCenter and IICS).
- Design and implement scalable data pipelines in Azure using ADF and Databricks.
- Collaborate with data architects, analysts, and stakeholders to understand data requirements.
- Ensure data quality, performance tuning, and error handling across ETL processes (a sketch of one error-handling pattern follows this posting).
- Monitor, troubleshoot, and optimize existing data pipelines and jobs.

Required Skills:
- 4+ years of experience in ETL development using Informatica (PowerCenter/IICS).
- Strong experience with Azure Data Factory and Databricks (SQL and/or PySpark).
- Good understanding of data warehousing, data lakes, and cloud data architecture.
- Proficiency in SQL and data modeling.

Skills: Snowflake, SQL, Informatica, data lakes, Databricks, PySpark, IICS, data warehousing, Azure Data Factory, data modeling
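To illustrate the error-handling duty called out above, here is a small PySpark sketch of a quarantine pattern: invalid rows are routed aside for inspection instead of failing the whole load. The paths and validity rules are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/landing/invoices/")

# Quality rule: amount must be positive and invoice_id non-null.
is_valid = (F.col("amount") > 0) & F.col("invoice_id").isNotNull()

valid = df.filter(is_valid)
rejects = df.filter(~is_valid).withColumn("rejected_at", F.current_timestamp())

# Good rows move on; bad rows are quarantined with a timestamp for triage.
valid.write.mode("append").parquet("/curated/invoices/")
rejects.write.mode("append").parquet("/quarantine/invoices/")
```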
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Name: Senior Data Engineer (Azure)
Years of Experience: 5

Job Description: We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in implementing ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform; being in charge of, or involved in, architecting, building, and managing data flows and pipelines; and constructing data storage (NoSQL, SQL), tools for working with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibilities:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure the development tools
- Communicate with all project stakeholders on project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal, and presentation communication (English)
- Coordinate with the UAT team

Role Requirements:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.; a sketch of a Type 2 slowly changing dimension load follows this posting)
- Knowledgeable in shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering, documentation processes, and performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tool is an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirements:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
- Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory
- Utilize Azure Databricks for data transformation and processing
- Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages
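As flagged in the role requirements, here is a hedged sketch of a Type 2 slowly changing dimension load. It assumes a Databricks/Delta Lake environment (MERGE requires a Delta target), and all table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Step 1: expire current dimension rows whose tracked attribute changed.
spark.sql("""
    MERGE INTO dim_customer t
    USING stg_customer s
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHEN MATCHED AND t.segment <> s.segment THEN
      UPDATE SET is_current = false, valid_to = current_timestamp()
""")

# Step 2: insert a fresh "current" row for new and changed customers.
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.segment,
           current_timestamp() AS valid_from,
           CAST(NULL AS TIMESTAMP) AS valid_to,
           true AS is_current
    FROM stg_customer s
    LEFT JOIN dim_customer t
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHERE t.customer_id IS NULL
""")
```

Because step 1 expires changed rows first, the anti-join in step 2 picks up both brand-new customers and those whose attributes changed, preserving full history.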
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Exciting Opportunity: Compliance Officer - KEB Hana Bank (New Devanahalli Branch!)

KEB Hana Bank is a leading global financial institution with a rich history and a strong presence across Asia and worldwide. We are committed to providing innovative financial solutions and superior customer service. As part of our continued growth and commitment to the Indian market, we are thrilled to announce the upcoming launch of our new branch in Devanahalli, Bangalore, set to open its doors in October 2025!

We are seeking a diligent and experienced Compliance Officer to establish and oversee the compliance framework for our new Devanahalli team. This is a pivotal role in ensuring our operations adhere to all regulatory requirements and internal policies from day one. If you have 5+ years of robust compliance experience within the banking sector, a deep understanding of RBI regulations, and are willing to commute to Devanahalli, we want to hear from you!

Your Key Responsibilities will include:

Regulatory Adherence & Reporting:
- Manage and maintain the RBI ADF/reporting system for timely and accurate submission of regulatory returns.
- Act as the Money Laundering Reporting Officer (MLRO), performing all associated duties to ensure adherence to RBI rules and regulatory bodies.
- Liaise effectively with the RBI, FIU, and other regulatory bodies to ensure compliance.
- Authorize and release payment orders filtered by the OFAC filtering system.
- Monitor internal control processes and submit monthly compliance reports to Head Office (H.O.).

Internal Audit & Controls:
- Act in the capacity of Internal Auditor, ensuring regular audits are performed across all branch departments.
- Manage daily and monthly internal audits.
- Coordinate and manage audits initiated by Head Office.
- Manage responses and follow-ups related to external bank audits.

Policy, Procedure & Advisory:
- Work closely with the Chief Executive Officer (CEO) in overseeing compliance procedures and advising on risk management.
- Assist the CEO with developing the entity-wide budget for compliance efforts, identifying resource gaps.
- Create, review, and update internal processes and manuals according to KEB Hana Bank policy and regulatory changes.
- Preview and assess new and renewed contracts, proposals for new banking products/services, and submissions of bank data to external parties.

Training & Planning:
- Develop and deliver training for all staff on internal controls and Anti-Money Laundering (AML) procedures, reporting completion to H.O.
- Establish, execute, and report results of the yearly compliance plan to H.O.

Stakeholder Management:
- Provide managers of other teams with appropriate and up-to-date compliance information or data promptly.
- Manage and maintain strong, cooperative relationships with regulators.

What We're Looking For:
- Minimum 5+ years of progressive experience in a banking compliance role.
- In-depth knowledge of RBI regulations, AML/KYC laws, OFAC, and other relevant Indian banking compliance frameworks.
- Proven experience as an MLRO or in a similar capacity.
- Strong experience with regulatory reporting systems (e.g., RBI ADF).
- Experience in developing and implementing compliance policies, procedures, and training programs.
- Demonstrable experience in conducting and managing internal audits.
- Excellent analytical, problem-solving, and decision-making skills.
- Strong communication, interpersonal, and liaison skills for effective interaction with regulators and internal teams.
- Meticulous attention to detail.
- Crucially, a willingness and ability to commute to our new Devanahalli branch.
- Relevant professional certifications (e.g., CAMS, IIBF certification in Compliance/AML) would be a significant advantage.

About KEB Hana Bank: KEB Hana Bank is a premier global financial group headquartered in South Korea, with an extensive network spanning numerous countries. We pride ourselves on our customer-centric approach, commitment to innovation, and a legacy of trust built over decades. Our expansion into Devanahalli signifies our dedication to serving the Indian market and contributing to its vibrant economy. Join us as we embark on this exciting new chapter!

Why Join KEB Hana Bank?
- Be a pioneering member of a new branch for a globally recognized bank.
- Opportunity to establish and shape the compliance culture from the ground up.
- Competitive salary and benefits package.
- A dynamic and supportive work environment with opportunities for growth.

Ready to make your mark? If you are a proactive and experienced compliance professional ready for a challenging and rewarding role, we encourage you to apply! You can apply via the "Apply" button on LinkedIn or send your resume directly to job.hanabank@gmail.com with the subject line "Application for Compliance Officer - Devanahalli." We look forward to reviewing your application!

#ComplianceOfficer #BankingCompliance #AML #KYC #RBI #RegulatoryAffairs #RiskManagement #FinanceJobs #BankingJobs #KEBHanaBank #Devanahalli #BangaloreJobs #JobOpening #Hiring #NewBranch #KarnatakaJobs
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Teamified works with top enterprises and digital-native businesses in Australia, helping them build their remote teams in India, the Philippines, and Sri Lanka. We pride ourselves on hiring great teams to work on exciting, game-changing technology. Teamified currently has 200+ engineers, testers, product managers, etc. working across 20+ partners. We focus on uplifting the way organisations build and manage their remote teams through great working relationships, trust, integrity, culture, and hiring processes. In addition to this, we are building our own technology product offerings. We strive to deliver the best outcomes for our customers, our partners, and our people.

Key Responsibilities:
- Data Collection and Interpretation: Gather and interpret data from various sources to identify relevant information and insights
- Pattern and Trend Identification: Analyze data sets to identify patterns and trends, enabling informed decision-making and forecasting
- Insight Derivation: Derive actionable insights and forecasts from data analysis, supporting business objectives and strategies
- Data Quality Assessment: Assess data quality and implement remediation measures to ensure accuracy and reliability (a small profiling sketch follows this posting)
- Data Mapping: Map data from different sources to facilitate integration and compatibility
- Report Creation and Data Visualization: Create comprehensive reports and visualize data using reporting tools to effectively communicate findings and insights
- Data Pipeline Management: Build and maintain data pipelines to streamline the flow of information and automate processes
- Process Enhancement: Identify opportunities for process enhancements and contribute to their implementation to improve efficiency and effectiveness.

Key Requirements:
- Proficiency in analyzing large data sets and writing comprehensive reports
- Excellent analytical skills with the ability to identify trends, patterns, and insights from data
- Strong attention to detail and a methodical mindset
- Strong problem-solving skills
- Intermediate understanding of databases and data models
- Hands-on experience with SQL database design
- Experience designing, developing, and publishing data ETL pipelines
- Exposure to Azure Data Factory (ADF) and Azure Storage
- Team skills and collaboration.

Desirable Skills:
- Understanding of reporting and data visualization tools such as Power BI
- Excellent communication skills
- Education in Mathematics, Computer Science, or Statistics
- Knowledge of programming languages such as Python or C#.

Benefits:
- Flexibility in work hours, with a focus on managing energy rather than time
- Access to online learning platforms and a budget for professional development
- A collaborative, no-silos environment, encouraging learning and growth across teams
- A dynamic social culture with team lunches, social events, and opportunities for creative input
- Private health insurance

If you possess these skills and are passionate about leveraging data to drive insights and business outcomes, we encourage you to apply for the role of Data Engineer. Join us in our mission to unlock the power of data for informed decision-making and business success. Apply now!
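In the spirit of the data-quality assessment responsibility above, here is a tiny pandas sketch that profiles a feed and flags columns breaching a null threshold. The input file and the 5% threshold are illustrative assumptions.

```python
import pandas as pd

df = pd.read_csv("customer_feed.csv")

# Profile each column: null fraction and distinct-value count.
report = pd.DataFrame({
    "null_pct": df.isna().mean().round(3),
    "distinct": df.nunique(),
})
report["flag"] = report["null_pct"] > 0.05  # flag columns with >5% nulls

print(report.sort_values("null_pct", ascending=False))
```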
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
As an Azure Data Engineer, your mission will be to lead the design and development of data ingestion processes into our Azure Databricks environment, supporting business workflows and data-sharing initiatives through modern APIs.

Responsibilities:
- Design, build, and maintain scalable data pipelines in Azure (a minimal ingestion sketch follows this posting).
- Develop and manage ETL processes to ingest, transform, and serve clean data.
- Ensure data integrity through validation, cleansing, and quality controls.
- Monitor and troubleshoot data pipeline performance and errors.
- Collaborate with cross-functional teams (analysts, scientists, app developers).
- Write and optimize SQL and Python/Spark code for large-scale data handling.
- Drive data governance practices including cataloging, lineage, and metadata.
- Implement data security and privacy in alignment with policy and compliance.
- Stay up to date with Azure data engineering tools and industry best practices.

Requirements:
- 5+ years of hands-on experience with Azure data engineering.
- Deep expertise in Azure Data Factory, Databricks, Synapse, Azure SQL, Logic Apps, and Azure Functions.
- Strong programming skills in Python and PySpark.
- Experience with Dremio, Kafka, or Snowflake is a big plus.
- Bonus: experience extracting data from SharePoint Online.
- Hands-on experience with structured and unstructured data processing.
- DP-203 certification or equivalent Azure training is highly valued.
- Strong communication skills: able to work with technical and business teams alike.

🔗 Apply now or share your profile with us at info@papigen.com

#AzureDataEngineer #Databricks #DataPipelines #AzureJobs #ETL #ADF #Synapse #DataEngineerJobs #RemoteJobs
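As a companion to the ingestion responsibility above, here is a minimal PySpark sketch of loading files from ADLS Gen2 into Databricks with an explicit schema rather than inference. The storage account, container, schema, and output path are hypothetical, and authentication setup is omitted.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.getOrCreate()

# Declaring the schema up front rejects silent drift instead of inferring it.
schema = StructType([
    StructField("event_id", StringType(),    nullable=False),
    StructField("event_ts", TimestampType(), nullable=False),
    StructField("amount",   DoubleType(),    nullable=True),
])

events = (spark.read.schema(schema)
               .json("abfss://events@examplelake.dfs.core.windows.net/2024/"))

# Partition by a derived date, not the raw timestamp, to keep partition counts sane.
(events.withColumn("event_date", F.to_date("event_ts"))
       .write.mode("append")
       .partitionBy("event_date")
       .parquet("abfss://bronze@examplelake.dfs.core.windows.net/events/"))
```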
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Title: Data Engineer

About The Role: As a Junior/Senior Data Engineer, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience:
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Mandatory Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Years of Experience Required: 2-4 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Fields of Study Required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Required Skills: AWS Glue, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation, Data Warehouse, Data Warehouse Indexing {+ 13 more}
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Title: Data Engineer

About The Role: As a Junior/Senior Data Engineer, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience:
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Mandatory Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Years of Experience Required: 2-4 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Fields of Study Required: Bachelor of Engineering, Master of Engineering, Master of Business Administration
Required Skills: AWS Glue, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation, Data Warehouse, Data Warehouse Indexing {+ 13 more}
Posted 1 month ago
13.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Director

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within PwC.

About The Role: As a Director, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience:
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Mandatory Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Years of Experience Required: 13+ years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Fields of Study Required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation {+ 30 more}
Travel Requirements: Up to 60%
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 1 month ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within PwC.

About The Role: As a Junior/Senior Data Engineer, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience:
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Mandatory Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; data lake: Snowflake, Databricks, etc.
Years of Experience Required: 4-7 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Fields of Study Required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Artificial Intelligence, Big Data, C++ Programming Language, Coaching and Feedback, Communication, Complex Data Analysis, Creativity, Data-Driven Decision Making (DIDM), Data Engineering, Data Lake, Data Mining, Data Modeling, Data Pipeline, Data Quality, Data Science, Data Science Algorithms, Data Science Troubleshooting, Data Science Workflows, Deep Learning, Embracing Change, Emotional Regulation {+ 27 more}
Travel Requirements: Not specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 1 month ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Markovate
At Markovate, we don't just follow trends; we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision and unmatched expertise.

Overview
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modelling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Requirements:
- 9+ years of experience in data engineering and data architecture.
- Excellent communication and interpersonal skills, with the ability to engage with teams.
- Strong problem-solving, decision-making, and conflict-resolution abilities.
- Proven ability to work independently and lead cross-functional teams.
- Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
- Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
- Strong work ethics and trustworthiness; highly collaborative and team-oriented, with commitment to the team's success.

Responsibilities:
- Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
- Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
- Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer); a minimal Airflow sketch follows this posting.
- Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
- Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
- Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
- Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
- Work closely with QA teams to integrate test automation and ensure data quality.
- Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases.
- Document architectures, pipelines, and workflows for internal stakeholders.

Skills & Experience:
- Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
- Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
- Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
- Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and dimensional modelling.
- Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
- Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
- Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
- Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
Experienced in data validation and exploratory data analysis with pandas profiling and AWS Glue Data Quality.

Great to have:
Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
Experience with data modeling, data structures, and database design.
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Proficiency in SQL and at least one programming language (e.g., Python).

What it's like to be at Markovate:
At Markovate, we thrive on collaboration and embrace every innovative idea. We invest in continuous learning to keep our team ahead in the AI/ML landscape. Transparent communication is key; every voice at Markovate is valued. Our agile, data-driven approach transforms challenges into opportunities. We offer flexible work arrangements that empower creativity and balance. Recognition is part of our DNA; your achievements drive our success. Markovate is committed to sustainable practices and positive community impact. Our people-first culture means your growth and well-being are central to our mission.

Location: Hybrid model, 2 days onsite.
(ref:hirist.tech)
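For candidates new to the bronze/silver/gold pattern mentioned above, here is a minimal PySpark sketch of a bronze-to-silver cleanse step. The lake paths, table, and column names are illustrative assumptions, and a Delta Lake-enabled environment (such as Databricks) is assumed.

```python
# Illustrative bronze -> silver step; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Read raw landed records from the bronze layer (path is an assumption).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Basic cleansing: enforce types, drop exact duplicates, reject null keys.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)

# Persist to the silver layer, overwriting the previous snapshot.
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```

In practice this step would also emit schema-validation and profiling metrics before promoting data to the silver layer.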
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What You'll Do
Data Analytics & Modeling: Apply strong data analytics and analytical skills to understand complex business requirements and translate them into effective data modeling solutions.
Data Pipeline Development: Design, develop, and maintain robust ETL (Extract, Transform, Load) pipelines using Azure Data Engineering services to ingest, process, and transform large datasets.
Data Warehousing: Build and optimize fact and dimension tables within analytical databases, contributing to scalable data warehousing solutions.
Azure Data Engineering: Hands-on development using Azure Data Engineering tools such as Azure Data Factory (ADF), Databricks, and Fabric.
Programming for Data: Utilize expertise in PySpark and Python to develop and maintain efficient data processing solutions, ensuring data integrity, performance, and scalability.
BI Dashboarding: Develop and maintain compelling Business Intelligence (BI) dashboards using tools like Power BI and/or Tableau, turning raw data into actionable insights.
Analytical Databases: Work with analytical databases such as Snowflake, Azure Synapse, and others to store and process large volumes of data.
SQL & Programming: Demonstrate proficiency in SQL for data manipulation and querying, and possess skills in other relevant programming languages.
Performance & Integrity: Ensure data integrity, quality, and optimal performance of data pipelines and BI solutions.
Collaboration: Collaborate effectively with data scientists, business analysts, product managers, and other engineering teams to understand data needs and deliver comprehensive solutions.

Skills & Qualifications:
Experience: Minimum 2 years of core, hands-on experience in Azure Data Engineering and Business Intelligence (Power BI and/or Tableau).
Data Fundamentals: Strong understanding of data analytics, data analysis, data management, and data modeling concepts.
Azure Data Engineering: Mandatory hands-on experience with Azure Data Engineering services including ADF, Databricks, and Fabric.
Programming for Data: Proficiency in PySpark and Python, with a proven ability to develop and maintain robust data processing solutions.
SQL Expertise: Strong proficiency in SQL for complex data querying and manipulation.
BI Tools: Practical experience building BI dashboards using Power BI and/or Tableau.
Analytical Databases: Experience with analytical databases like Snowflake, Azure Synapse, etc.
Problem Solving: Strong problem-solving and critical thinking abilities to tackle complex data challenges.
Education: Bachelor's degree in Computer Science, Information Systems, or a related field.

Preferred Qualifications (Nice-to-Have):
Relevant Microsoft Azure certifications (e.g., Azure Data Engineer Associate, Azure Data Analyst Associate).
Experience with real-time data streaming technologies (e.g., Kafka, Azure Event Hubs).
Familiarity with Data Governance and Data Quality frameworks.
Exposure to MLOps concepts and tools.
(ref:hirist.tech)
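As an illustration of the fact and dimension work described above, a simplified PySpark sketch might look like the following; every path, table, and column name here is a hypothetical stand-in.

```python
# Hypothetical star-schema build: derive a date dimension and a sales fact.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

sales = spark.read.parquet("/data/curated/sales")  # assumed input path

# Dimension: distinct calendar attributes keyed by date.
dim_date = (
    sales.select(F.to_date("sold_at").alias("date_key"))
    .distinct()
    .withColumn("year", F.year("date_key"))
    .withColumn("month", F.month("date_key"))
)

# Fact: measures at the grain of one row per date and product.
fact_sales = (
    sales.withColumn("date_key", F.to_date("sold_at"))
    .groupBy("date_key", "product_id")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_lines"))
)

dim_date.write.mode("overwrite").parquet("/data/warehouse/dim_date")
fact_sales.write.mode("overwrite").parquet("/data/warehouse/fact_sales")
```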
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary
We are looking for a Data Engineer with solid hands-on experience in Azure-based data pipelines and Snowflake to help build and scale data ingestion, transformation, and integration processes in a cloud-native environment.

Responsibilities:
Develop and maintain data pipelines using ADF, Snowflake, and Azure Storage.
Perform data integration from various sources including APIs, flat files, and databases.
Write clean, optimized SQL and support data modeling efforts in Snowflake.
Monitor and troubleshoot pipeline issues and data quality concerns.
Contribute to documentation and promote best practices across the team.

Requirements:
3-5 years of experience in data engineering or a related role.
Strong hands-on knowledge of Snowflake, Azure Data Factory, SQL, and Azure Data Lake.
Proficient in scripting (Python preferred) for data manipulation and automation.
Understanding of data warehousing concepts and ETL/ELT patterns.
Experience with Git, JIRA, and agile delivery environments is a plus.
Strong attention to detail and eagerness to learn in a collaborative team setting.
(ref:hirist.tech)
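A rough sketch of the kind of Snowflake loading step this role covers, using the snowflake-connector-python package; the connection parameters, stage, and table names are all placeholders.

```python
# Minimal Snowflake load step; credentials and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="etl_user",        # placeholder credentials
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load a staged CSV file into a staging table, then upsert into the target.
    cur.execute(
        "COPY INTO staging_orders FROM @orders_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("""
        MERGE INTO orders t
        USING staging_orders s ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """)
finally:
    conn.close()
```

In an ADF-driven design, a pipeline activity would typically trigger this logic (or an equivalent Snowflake script) after files land in Azure Storage.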
Posted 1 month ago
6.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Power BI professionals in the following areas:

Experience: 6-9 Years

Job Description: Power BI Developer with Azure ADF
Work on Power BI reports: develop new reports, fix data issues in existing reports, and support users with data validation.
Support the data team in understanding the functional requirements.
Strong experience in SQL and writing complex DAX queries.
Understand the existing report requirements and capture new report specifications.
Coordinate among various groups to understand report KPIs.
Participate in the data requirement sessions and develop Power BI reports.
Provide solutioning and design prototypes for use-case reports.
Specialized in different reporting tools.
Responsible for report feature assessment and building the report matrix.
Certifications: Mandatory

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
Flexible work arrangements, free spirit, and emotional positivity.
Agile self-determination, trust, transparency, and open collaboration.
All support needed for the realization of business goals.
Stable employment with a great atmosphere and an ethical corporate culture.
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Azure Cloud – Technology Assurance
As a Risk Assurance Senior, you'll contribute technically to Risk Assurance client engagements and internal projects. An important part of your role will be to assist fellow Seniors and Managers while actively participating within the client engagement. Similarly, you'll anticipate and identify risks within engagements and share any issues with senior members of the team. In line with EY's commitment to quality, you'll confirm that work is of high quality and is reviewed by the next-level reviewer. As a member of the team, you'll help to create a positive learning culture and assist fellow team members while delivering an assignment.

The opportunity
We're looking for professionals with at least 3 years of experience. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of a new service offering.

Skills and Summary of Accountabilities:
Designing, architecting, and developing solutions leveraging Azure cloud to ingest, process, and analyse large, disparate data sets to exceed business requirements.
Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake Storage for data storage and processing; designed data pipelines using these technologies.
Working knowledge of data warehousing/modelling, ETL/ELT pipelines, and data democratization using cloud services.
Design, build, and maintain efficient, reusable, and reliable code, ensuring the best possible performance, quality, and responsiveness of applications using reliable Python code.
Automate tasks through Python scripting, databases, and other advanced technologies like Databricks, Synapse Studio, ADF, etc.
Exposure to client-facing roles; collaborate with cross-functional teams including internal audit, IT security, and business stakeholders to assess control effectiveness and facilitate remediation activities.
Preferred knowledge/understanding of IT controls, risk, and compliance; design of IT risk controls frameworks such as IT SOX; testing of internal controls such as IT general controls, IT application controls, IPE-related controls, interface controls, etc.

To qualify for the role, you must have:
3 years of experience in building end-to-end business solutions using big data and data engineering.
Expertise in core Microsoft Azure Data Services (e.g., Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, Data Lake services, etc.).
Familiarity with integration services: Azure Logic Apps, Function Apps, Stream Analytics, Triggers, Event Hubs, etc.
Expertise in cloud-related big data integration and infrastructure tech stack using Azure Databricks and the Apache Spark framework.
Must have: Python and SQL; R and Scala preferred.
Experience developing software tools using utilities, pandas, NumPy, and other libraries/components.
Hands-on expertise in using Python frameworks (like Django, Pyramid, Flask).
Preferred: a substantial background in data extraction and transformation, developing data pipelines using MS SSIS, Informatica, Talend, or other on-premises tools.
Preferred: knowledge of Power BI or other BI tools.
Good understanding of version control with Git, JIRA, change/release management, build/deploy, and CI/CD with Azure DevOps.

Ideally, you'll also have:
A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data analytics, or related disciplines.
Experience with AI/ML (a plus).
Preferred certification: DP-203 Azure Data Engineer or equivalent.
Ability to communicate clearly and concisely, using strong writing and verbal skills to communicate facts, figures, and ideas to others.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
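As a flavor of the pandas tooling referenced above, here is a small sketch of a completeness and control-total check between a source extract and a target load, of the kind used when testing IPE-related controls; the file names and key columns are hypothetical.

```python
# Hypothetical completeness/accuracy check between an extract and a load.
import pandas as pd

source = pd.read_csv("source_extract.csv")  # assumed extract from the source system
target = pd.read_csv("target_load.csv")     # assumed rows landed in the warehouse

# Completeness: every source key should appear in the target.
missing = source.loc[~source["record_id"].isin(target["record_id"])]

# Accuracy: compare a control total on a numeric column.
src_total = source["amount"].sum()
tgt_total = target["amount"].sum()

print(f"Missing records: {len(missing)}")
print(f"Control totals match: {src_total == tgt_total} ({src_total} vs {tgt_total})")
```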
Posted 1 month ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Oracle Global Services Center (GSC) is a fast-growing cloud consulting team passionate about our customers' rapid and successful adoption of Oracle Cloud Solutions. Our flexible and innovative "Optimum Shore" approach helps our clients implement, maintain, and integrate their Oracle Cloud Applications and Technology environments while reducing overall total cost of ownership. We assemble an efficient team for each client by blending resources from onshore, near shore, and offshore global delivery centers to match the right expertise, to the right solution, for the right cost. To support our rapid growth, we are seeking versatile consultants who bring a passion for providing an excellent client experience, enabling client success by developing innovative solutions. Our cloud solutions are redefining the world of business, empowering governments, and helping society evolve with the pace of change. Join our team of top-class consultants and help our customers achieve more than ever before.

This is a senior consulting position operating independently, with some assistance and mentorship, within a project team or customer engagement, aligned with Oracle methodologies and practices. The role performs standard duties and tasks with some variation to implement Oracle products and technology to meet customer specifications.

Career Level - IC2

Responsibilities
Operates independently to provide quality work products to an engagement. Performs multifaceted and complex tasks that need independent judgment. Applies Oracle methodology, company procedures, and leading practices. Demonstrates expertise to deliver solutions on complex engagements. May act as the functional team lead on projects. Efficiently collaborates with management of customer organizations. Participates in business development activities. Develops and configures detailed solutions for complex projects.

Detail Requirements:
The candidate is expected to have sound domain knowledge in HCM covering the hire-to-retire cycle, with 7 to 12 years of experience. They must have been part of at least 3 end-to-end HCM Cloud implementations, including at least 1 project as a lead.

FUNCTIONAL - The candidate must have knowledge of the Core HR module along with any of the following modules:
Time and Labor
Absence Management
Payroll
Benefits
Compensation
Recruiting
The candidate should have been in client-facing roles and interacted with customers in requirement-gathering workshops, design, configuration, testing, and go-live. Engineering graduates with an MBA (HR) will be preferred.

TECHNICAL - In-depth understanding of the data model and business process functionality (and its data flow) in the HCM Cloud application and Oracle EBS / PeopleSoft (HRMS). Experienced in Cloud HCM conversions, integrations (HCM Extracts & BIP), reporting (OTBI & BIP), Fast Formula, and personalization. Engineering graduation in any field, an MCA degree, or equivalent experience. Proven experience with Fusion technologies including HDL, HCM Extracts, Fast Formulas, BI Publisher Reports, and Design Studio. Apart from the above, advanced knowledge of OIC, ADF, Java, PaaS, DBCS, etc. would be an added advantage. Good functional or technical leadership capability with strong planning and follow-up skills, mentorship, work allocation, monitoring, and status updates to the Project Coordinator. Should have strong written and verbal communication skills, personal drive, flexibility, teamwork, problem-solving, influencing and negotiating skills, organizational awareness and sensitivity, engagement delivery, continuous improvement, knowledge sharing, and client management. Assist in the identification, assessment, and resolution of complex technical issues/problems. Interact with the client frequently around specific work efforts/deliverables. The candidate should be open to domestic or international travel for both short and long durations.

Life at Oracle
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veteran status, or any other characteristic protected by law. At Oracle, we don't just value differences; we celebrate them! We are committed to crafting a workplace where all kinds of people work together, because we believe innovation starts with diversity. https://www.oracle.com/corporate/careers/culture/diversity.html

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Data Engineer - Azure
Location: Bengaluru, India - 12 days onsite per month
Duration: 6-month extending contract, with potential to convert to FTE

Required Skills & Experience
At least 2 years of development/support experience in ADF V2.
At least 1 year of experience with Spark (PySpark and other Python data libraries); Spark with Scala is also acceptable.
At least 1 year of experience in T-SQL, creating complex stored procedures, triggers, etc.
Should have used Azure DevOps extensively as part of projects.
Should have worked on at least 1 project involving a hybrid cloud implementation.
Should have worked on SQL Database, Azure SQL Database, and Azure Data Warehouse, and should be comfortable with data warehousing concepts.
Should have worked on Azure Logic Apps and Azure Functions for not less than 1 year.
3+ years of professional experience in a data engineer/analyst role or similar.
Knowledge of creating Power BI reports and dashboards.

Job Description
Insight Global is seeking a Data Engineer for a large company based in India. You will join a team in the midst of a data cleansing initiative (product data and sales orders data). You will build compelling and clear visualizations of data and design, architect, and implement Azure Data Factory V2 pipelines. You will help the team develop solutions that implement an error-logging and alerting mechanism using various available Azure services.

Compensation: $25 to $35 LPA. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
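One plausible shape for the error-logging and alerting mechanism described above is a try/except wrapper that posts failures to an incoming webhook (for example, a Teams channel connector); in this rough Python sketch the webhook URL and the pipeline step are placeholders.

```python
# Rough alerting wrapper; webhook URL and the pipeline step are placeholders.
import logging
import requests

logging.basicConfig(level=logging.INFO)
WEBHOOK_URL = "https://example.webhook.office.com/..."  # placeholder incoming webhook

def run_pipeline_step():
    # Stand-in for a real unit of work (e.g., invoking a transformation).
    raise RuntimeError("simulated failure for demonstration")

try:
    run_pipeline_step()
except Exception as exc:
    logging.exception("Pipeline step failed")
    # Post a simple alert message to the channel webhook.
    requests.post(WEBHOOK_URL, json={"text": f"Pipeline step failed: {exc}"}, timeout=10)
    raise
```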
Posted 1 month ago
5.0 - 10.0 years
10 - 16 Lacs
Gurugram
Hybrid
Strong experience in SQL development, along with experience in the AWS cloud and good experience in ADF.
Posted 1 month ago
5.0 years
0 Lacs
Gurgaon
On-site
Minimum 5 years of experience in SQL development with strong query optimization skills.
Hands-on experience in designing, developing, and maintaining SQL queries, stored procedures, and database structures.
Proven expertise in building and managing ETL pipelines using Azure Data Factory (ADF).
Experience in integrating data from on-premise and cloud sources using AWS services.
Knowledge of data warehousing concepts, data modeling, and transformation logic.
Ability to ensure data accuracy, consistency, and performance across multiple systems.
Strong troubleshooting skills for resolving data and performance-related issues.
Familiarity with cloud-based data architecture and modern data engineering best practices.
Effective communication and collaboration with cross-functional teams.
Ability to document technical processes and data workflows clearly.

Location: Gurugram
Job type: Hybrid
Job Type: Full-time
Work Location: In person
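On the SQL development side, a minimal Python sketch of invoking a stored procedure through pyodbc; the connection string, procedure name, and parameter are assumptions.

```python
# Minimal stored-procedure call via pyodbc; all names are illustrative.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.example.com;DATABASE=Sales;"
    "UID=etl_user;PWD=***"  # placeholder credentials
)

cursor = conn.cursor()
# Parameterized execution avoids SQL injection and plan-cache bloat.
cursor.execute("EXEC dbo.usp_RefreshDailySales @RunDate = ?", "2024-01-31")
conn.commit()
conn.close()
```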
Posted 1 month ago
5.0 years
5 - 9 Lacs
Chennai
On-site
Senior Data Engineer – DBT & Snowflake

About the Company
Systech is a modern data and analytics consulting firm, helping clients embed data-driven capabilities into their business operations. We offer end-to-end data engineering services and outcomes-based analytics solutions to drive your business forward in the digital age. Systech has over 30 years of experience, delivering 1500+ projects to industry-leading brands across the world.

Job ID: JD20250527-1123
Job Name: Senior Data Engineer – DBT & Snowflake
Years of Experience: 5
No of Openings: 2

Job Description:
We are looking for a skilled and experienced DBT-Snowflake developer to join our team! You will be involved in implementing ongoing and new initiatives. If you love learning, thinking strategically, innovating, and helping others, this role is for you.

Primary Skills: DBT, Snowflake
Secondary Skills: ADF, Databricks, Python, Airflow, Fivetran, Glue

Role Description:
This data engineering role includes:
Creating and managing the technological infrastructure of a data platform.
Architecting, building, and managing data flows/pipelines.
Constructing data storage systems (NoSQL, SQL).
Utilizing big data tools (Hadoop, Kafka) and integration tools for connecting data sources.

Role Responsibilities:
Translate functional specs and change requests into technical specs.
Convert BRDs and specs into code.
Develop efficient, testable, well-documented code.
Ensure data/application accuracy and integrity via analysis, coding, and problem-solving.
Set up development environments and configure dev tools.
Communicate project status to stakeholders.
Manage and secure data to meet business needs.
Automate processes where needed.
Communicate proficiently in English (written, verbal, presentation).
Coordinate with the UAT team.

Role Requirements:
Proficient in basic and advanced SQL (procedures, analytical functions).
Strong understanding of data warehousing (dimensional modeling, CDC, SCDs).
Knowledge of Shell/PowerShell scripting.
Familiar with relational and non-relational databases, data streams, and file stores.
Skilled in performance tuning and optimization.
Experience in data profiling and validation.
Familiar with requirements gathering, documentation, and unit testing.
Understanding of QA/testing processes.
Knowledge of BI tools is a plus.
Logical reasoning and analytical skills.
Willingness to learn and take initiative.
Comfort in fast-paced Agile environments.

Additional Requirements:
Design, develop, and maintain scalable data models and transformations using DBT with Snowflake.
Effectively transform/load data from diverse sources into data warehouses or lakes.
Manage DBT models, ensuring accurate, aligned data transformation.
Convert raw/unstructured data into structured datasets via DBT.
Optimize SQL queries within DBT to boost performance.
Establish DBT best practices for performance, scalability, and reliability.
Strong SQL skills and a deep understanding of data warehousing and modern data architectures.
Familiarity with cloud platforms (AWS, Azure, GCP).
Migrate legacy transformation code into modular DBT models.

How to apply:
Qualified applicants please send resumes to: Systech Solutions, Inc.,
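DBT runs are often wired into a Python orchestrator (such as an Airflow task); here is a bare-bones sketch that shells out to the dbt CLI and fails loudly on a non-zero exit. The model selector is a placeholder, and the dbt CLI is assumed to be on the PATH with a configured profile.

```python
# Bare-bones dbt orchestration hook; the model selector is a placeholder.
import subprocess
import sys

result = subprocess.run(
    ["dbt", "run", "--select", "staging.orders"],  # assumes dbt CLI on PATH
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    # Surface dbt's error output so the scheduler marks the task failed.
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)
```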
Posted 1 month ago