0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Join us as an "Associate" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unapparelled customer experiences. To be successful as a "Associate", you should have experience with: About India Corporate Operations About Regulatory Reporting department As part of the regulatory and supervisory functions bestowed on it, the Regulators in India collects various fixed format data (called 'Returns') from commercial banks, financial institutions, authorised dealers and non-banking financial institutions. This department is responsible for timely and accurate filing of Operations Returns to Regulator either directly or indirectly. This department is also accountable for preparation and oversight of various exposure reports for local and group Credit risk. Overall purpose of role The purpose of this role is to lead the Regulatory Reporting team in preparation, submission and automation of Corporate & Investment Banking Regulatory returns for Corporate and Investment Bank Operations as well as exposure reports for local and group Credit risk team. This role envisages team management, stakeholder management and maintain robust control environment. Managing and leading the team in delivering solutions and effective decision making Liaise with respective Stakeholders (Finance, Credit, Coverage, BIU, Compliance, Legal, Internal & External Auditors, Risk Control Unit, Technology, Vendor partners etc) on an ongoing basis to meet Barclays deliverables and Internal, external customer requirements. To act as a role model for all our values as well as inspire, motivate the team, drive for results, and communicate powerfully and prolifically. To conduct periodic assessments of the Control environment by analysing existing controls and issue around timeliness accuracy and completeness of risk information. Identify missing or weak controls, and work with risk reporting teams and other infrastructure teams to improve the control environment. Key Accountabilities Credit Reporting: Management of Operations support activities: Timely follow-up with Internal stakeholders for data input and timely escalation. Timely contribute to decks submitted to banks Governance forums. Maintain effective and standard operational processes and documentation. Assist in preparing any other documentation as may be required from time to time. Partner with support functions to drive excellence, continuous improvement, and simplification of processes in a timely and professional manner. Regulatory Reporting: Ensure that all returns and reports are delivered timely and accurately, SLAs are met, measured, and reported to stakeholders on agreed frequency. Accountable for preparation and production of 100+ Regulatory Returns like CRILC, RLC, LEF, RAQ, DSB XII, PSL, Non Resident Guarantee and Invocation, CIC Reporting, FTD, GPB, LCR Reporting, DSB Return - I, DEAF -Form I and II, DEAF -Form III, DEAF -Form IV, BAL Statement, R Return, DEAF -Form V, FC-TRS form, , Quarterly Investment Reconciliation Certificate, Short Sale Reporting, Pvt Placement Data, Basel III Liquidity Return (BLR6), Quarterly Review of Investment, RBS – (Tranche I, IA,IB, IC, ID, IE, IF, IG, IH II, III, Bank Profile), Half Yearly Review of Investment, LRA2, DICGC Premium, QCCP Exposure Report, Cross currency derivative statement , Past Performance Report, Commodity Hedging and any other return as assigned from time to time. 
Timely issue management: escalate open and aging issues as per the bank's escalation matrix and follow up for resolution. Timely contribution to decks submitted to the bank's various governance forums. Ensure that regulatory filings are in line with regulatory guidelines and Barclays standards and policy. Manage the RBI ADF automation project for the returns owned by Operations: clearly understand the returns automation requirements, interact with the stakeholders, and prepare BRDs. Perform user acceptance testing from a functional point of view, raising defects if any and following up for closure. Collaborate with stakeholders such as Credit Risk, Compliance, Finance, Technology teams and vendor partners in the automation cycle. Serve as an in-house subject matter expert on issues arising out of functional areas. Maintain effective and standard operational processes and documentation. Assist in preparing any other documentation as may be required from time to time. Partner with support functions to drive excellence, continuous improvement, and simplification of processes in a timely and professional manner. Contribute to the regulatory reporting compliance framework.

Stakeholder management and leadership: Stakeholder management and leadership skills are critical components of the successful delivery of many activities required within this role.

Stakeholder Management: Liaising with Technology on automation of regulatory returns, preparation of BRDs and definition of logic. Liaising with the Credit Risk and Coverage teams on various data and information requirements. Liaising with the BIU team to obtain various reports for internal or regulatory requirements. Liaising with the Compliance and Legal teams on new regulations and changes in process notes, regulatory submissions, and compliance requirements. Liaising with Corporate & Investment Operations teams. Liaising with RCU for assistance on recording their borrowers' static data in CFMS and regulatory submissions. Liaising with internal audit teams for any audit requirements or changes in existing processes. Liaising with external vendors (IT support / auditors) as and when the requirement arises. Working with the wider risk reporting and risk management teams to ensure controls are fit for purpose, with an agreed schedule to implement missing or weak controls.

Leadership: Being proactive and instilling a strong sense of ownership in the team. Decision making and problem solving: effective problem-solving skills, with a deep, broad and clear understanding of the key concerns challenging the team, and driving control improvements. Ensure efficiency by highlighting areas that could pose potential risk to the bank and developing solutions to enhance current processes and controls. Create strong partnerships with the Monitoring team within RCU, Trade Ops, Payments Ops, Investment Bank Ops and other divisions within Operations. Support business areas in deciphering upcoming regulatory and reporting changes and help them implement appropriate controls to meet these requirements. Strong analytical skills to enable good decision making. The incumbent should be able to provide guidance to other team members and colleagues in their specific areas of expertise. Demonstrated ability to manage, motivate and develop the team through proper planning and execution. Flexibility to adapt to rapidly changing business events; ability to work well under pressure, working accurately with attention to detail and meeting deadlines.
Active multi-tasking skills to analyse problems in detail and react quickly to performance-related issues, coordination with other teams, and task prioritization conflicts.

Risk and Control Objective: Take ownership for managing risk and strengthening controls in relation to the work you do.

Skills and qualifications include: Basic understanding of Group Policy Guidelines, Credit Risk, Country Grades and Exposure Guidelines. General knowledge and understanding of the bank's products and services is required to assist with proposed or existing transactions. IT skills are required to extract and analyse a wide variety of reports. Management and leadership skills, including people development.

Person Specification: This position requires an analytics professional specializing in regulatory reporting and credit reporting in the financial services industry, especially related to Corporate and Investment Banking products and Operations. Sound knowledge of financial accounting concepts and banking applications. Experience working in a Regulatory Reporting and Reconciliation function. Clear understanding of regulatory reporting guidelines and change management principles within a banking environment. Highly motivated, results-oriented and stakeholder-focused with strong people management skills. Good communication skills – fluent oral and written English. Strong analytical skills and the ability to correlate general ledger, data and reporting impacts across different interfacing applications and data flows. Able to visualize, implement and generate improvements in the current process, deliver efficiencies, and strengthen the process framework and controls while ensuring that the quality of reporting is immaculate. Ability to analyse and interpret large volumes of data, with aggregation and analysis of data in MS Excel to produce reports. Understand key performance measures and indicators that drive reporting and analytics. Proficient in MS Office. Strong interpersonal, analytical, facilitation, decision-making and organization skills. Proactive, independent, and self-managing; organized, detail-oriented and results-driven. Change and transformation experience is a plus.

Desirable Skills/Preferred Qualifications: Fluent written and spoken English. Eye for detail in document vetting and facility documentation. Customer-centric attitude. Relationship management skills. Communication skills. Personal organisation. Information gathering ability. Problem solving/decision making skills. Proactive person with high integrity.

Essential Skills/Basic Qualifications: Experience in Ops support function activities such as preparation of various regulatory returns, MIS, and system knowledge. MBA/Post-Graduate/Graduate.

Desirable Skills/Preferred Qualifications: Knowledge of Barclays business areas, key priorities, and challenges. Banking and financial sector experience and knowledge of the types of activities that the Ops function performs. Job location is Mumbai.

Purpose of the role: To support business areas with day-to-day processing, reviewing, reporting, trading and issue resolution.

Accountabilities: Support various business areas with day-to-day initiatives including processing, reviewing, reporting, trading, and issue resolution. Collaboration with teams across the bank to align and integrate operational processes. Identification of areas for improvement and providing recommendations in operational processes.
Development and implementation of operational procedures and controls to mitigate risks and maintain operational efficiency. Development of reports and presentations on operational performance and communication of findings to internal senior stakeholders. Identification of industry trends and developments to implement best practice in banking operations. Participation in projects and initiatives to improve operational efficiency and effectiveness.

Analyst Expectations: To meet the needs of stakeholders/customers through specialist advice and support. Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles. Likely to have responsibility for specific processes within a team. They may lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they manage their own workload, take responsibility for the implementation of systems and processes within their own work area and participate in projects broader than the direct team. Execute work requirements as identified in processes and procedures, collaborating with and impacting the work of closely related teams. Check the work of colleagues within the team to meet internal and stakeholder requirements. Provide specialist advice and support pertaining to own work area. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams. Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise. Make judgements based on practice and previous experience. Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures. Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements. Build relationships with stakeholders/customers to identify and address their needs. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 4 days ago
4.0 - 7.0 years
3 - 5 Lacs
Pune
Work from Office
Position: SQL Developer. Employment Type: Full Time. Location: Pune, India. Salary: TBC. Work Experience: Applicants for this position should have 4+ years working as a SQL developer.

Project Overview: The project will use a number of Microsoft SQL Server technologies and includes development and maintenance of reports, APIs and other integrations with external financial systems. The successful applicant will liaise with other members of the team and will be expected to work on projects where they are the sole developer as well as part of a team on larger projects. The applicant will report to the SQL Development Manager.

Job Description: Ability to understand requirements clearly and communicate technical ideas to both technical stakeholders and business end users. Investigate and resolve issues quickly. Communication with end users. Working closely with other team members to understand business requirements. Complete structure analysis and systematic testing of the data.

Skills: Microsoft SQL Server 2016–2022. T-SQL programming (4+ years' experience). Query/stored procedure performance tuning. SQL Server Integration Services. SQL Server Reporting Services. Experience in database design. Experience with source control. Knowledge of the software engineering life cycle. Previous experience in designing, developing, testing, implementing and supporting software. Third-level IT qualification. SQL MCSA or MCSE preferable. Knowledge of data technologies such as Snowflake, Airflow, ADF desirable.

Other skills: Ability to work on own initiative and as part of a team. Excellent time management and decision-making skills. Excellent communication skills in English, both written and verbal. Background in the financial industry preferable.

Academic Qualification: Any graduate or postgraduate degree. Any specialization in IT.
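As a rough illustration of the kind of SQL Server work this role describes (not part of the posting), the sketch below uses Python with pyodbc to make a parameterized call to a stored procedure, the pattern that lets SQL Server cache and reuse the execution plan. The server, database, procedure, and column names are hypothetical.

```python
# Illustrative only: server, database, procedure, and column names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=finance-sql01;DATABASE=Reporting;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Parameterized call (qmark style) rather than string concatenation,
# so the plan can be cached and SQL injection is avoided.
cursor.execute("EXEC dbo.usp_GetTradesByDate @AsOfDate = ?", "2024-06-30")

for row in cursor.fetchall():
    print(row.TradeId, row.Notional)

conn.close()
```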
Posted 4 days ago
10.0 - 15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Designation: Data Architect. Location: Pune. Experience: 10-15 years.

Skills
Azure Expertise: The architect should have experience in architecting large-scale analytics solutions using native services such as Azure Synapse, Data Lake, Data Factory, HDInsight, Databricks, Azure Cognitive Services, Azure ML, and Azure Event Hub.
Architecture Creation: Assist with creation of a robust, sustainable architecture that supports requirements and provides for expansion with secured access.
BFSI Experience: Experience in building/running large data environments for BFSI clients.
Collaboration: Work with customers, end users, technical architects, and application designers to define the data requirements and data structure for BI/Analytics solutions.
Data Modeling: Design conceptual and logical models for the data lake, data warehouse, data mart, and semantic layer (data structure, storage, and integration). Lead the database analysis, design, and build effort.
Communication: Communicate physical database designs to the lead data architect/database administrator.
Data Model Evolution: Evolve data models to meet new and changing business requirements.
Business Analysis: Work with business analysts to identify and understand requirements and source data systems.
Big Data Technologies: Expert in big data technologies on Azure/GCP.
ETL Platforms: Experience with ETL platforms like ADF, Glue, Ab Initio, Informatica, Talend, Airflow.
Data Visualization: Experience in data visualization tools like Tableau, Power BI, etc.
Data Engineering & Management: Experience in a data engineering, metadata management, database modeling and development role.
Streaming Data Handling: Strong experience in handling streaming data with Kafka.
Data API Understanding: Understanding of Data APIs and web services.
Data Security: Experience in data security, data archiving/backup and encryption, and defining the standard processes for the same.
DataOps/MLOps: Experience in setting up DataOps and MLOps.
Database Design: Ensure that the database designs fulfill the requirements, including data volume, frequency, and long-term BI/Analytics growth requirements.
Integration: Work with other architects to ensure that all components work together to meet objectives and performance goals as defined in the requirements.
System Performance: Improve system performance by conducting tests, troubleshooting, and integrating new elements.
Data Science Coordination: Coordinate with the Data Science teams to identify future data needs and requirements and create pipelines for them.
Soft Skills: Soft skills such as communication, leading the team, and taking ownership and accountability for a successful engagement.
Quality Management: Participate in quality management reviews.
Customer Management: Managing customer expectations and business user interactions.
Research and Development: Deliver key research (MVP, POC) with an efficient turn-around time to help make strong product decisions.
Mentorship: Demonstrate key understanding and expertise on modern technologies, architecture, and design. Mentor the team to deliver modular, scalable, and high-performance code.
Innovation: Be a change agent on key innovation and research to keep the product and team at the cutting edge of technical and product innovation.
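For illustration only (not part of the listing), here is a minimal PySpark sketch of the kind of dimensional modelling this role mentions: deriving a dimension and a fact table from raw lake data and persisting them as Delta tables on Databricks. The paths, schema names, and columns are hypothetical.

```python
# Minimal sketch: a small star-schema layer as Delta tables on Databricks.
# Storage paths, schema ("gold"), and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Raw orders landed in the data lake (e.g. a mounted ADLS Gen2 path).
orders = spark.read.parquet("/mnt/raw/sales/orders")

# Dimension: one row per customer.
dim_customer = (orders.select("customer_id", "customer_name", "country")
                      .dropDuplicates(["customer_id"]))

# Fact: one row per order line, keyed to the dimension.
fact_sales = orders.select(
    "order_id", "customer_id", "order_date",
    (F.col("quantity") * F.col("unit_price")).alias("gross_amount"),
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")
fact_sales.write.format("delta").mode("overwrite").saveAsTable("gold.fact_sales")
```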
Posted 4 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Microsoft. Management Level: Senior Associate.

Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: We are seeking a Data Engineer to design, develop, and maintain data ingestion processes for a data platform built using Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF and Azure Databricks, and requires strong SQL skills.

Responsibilities: Key responsibilities include developing, testing, and optimizing ETL workflows and maintaining documentation. ETL development experience in the Microsoft data track is required. Work with business teams to translate business requirements into technical requirements. Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe.

Mandatory skill sets:
· Strong proficiency in Azure Databricks, including Spark and Delta Lake.
· Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
· Proficiency in data integration, ETL processes and T-SQL.
· Experience working in Python for data engineering.
· Experience working with Postgres databases.
· Experience working with graph databases.
· Experience in architecture design and data modelling.

Good-to-have skill sets:
· Unity Catalog / Purview
· Familiarity with Fabric/Snowflake service offerings
· Visualization tool – Power BI

Preferred skill sets: Hands-on knowledge of Python and PySpark, and strong SQL knowledge. ETL and data warehousing experience is a must.
Relevant certifications (any one, e.g., Databricks Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, Azure Solution Architect) are mandatory. Years of experience required: 5+ years. Education qualification: Bachelor's degree in Computer Science, IT, or a related field. Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor Degree. Degrees/Field of Study preferred: Certifications (if blank, certifications not specified). Required Skills: Data Engineering. Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}. Desired Languages (if blank, desired languages not specified). Travel Requirements. Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
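As an illustrative sketch of the kind of ETL step described above (not an official example from the posting), the snippet below performs an upsert into a curated Delta table on Azure Databricks after a new batch lands in the lake. The storage account, container, table, and key column are hypothetical.

```python
# Illustrative sketch: upsert-style load into a Delta table on Azure Databricks.
# Storage account, paths, table names, and the business key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New batch landed by an ADF copy activity into the raw zone of the lake.
updates = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/customers/2024-06-30/")

target = DeltaTable.forName(spark, "silver.customers")

# Merge new records into the curated table on the business key.
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```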
Posted 4 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us: CodeVyasa is a mid-sized product engineering company working with top-tier product and solutions companies like McKinsey, Walmart, RazorPay, Swiggy, and others. We are a growing team of 550+ professionals, delivering cutting-edge solutions across Agentic AI, RPA, Full-stack development, Data Engineering, and various other GenAI areas. If you’re passionate about coding, problem-solving, innovation, and leading impactful data projects, we would love to hear from you!

Key Responsibilities
• Design, build, and manage scalable, high-performance data pipelines using Azure Data Factory and PySpark.
• Lead data warehousing, data lake, and lakehouse architecture initiatives to enable advanced analytics and BI solutions.
• Collaborate closely with business stakeholders to translate complex data requirements into effective technical solutions.
• Build and maintain impactful dashboards and reports using Power BI.
• Provide technical leadership, mentorship, and guidance to junior and mid-level engineers across data projects.
• Oversee workflow management, job monitoring, pipeline health, and troubleshooting to ensure seamless data operations.
• Ensure best practices in data governance, quality, security, and performance tuning.
• Manage project timelines, task prioritization, and cross-team collaboration to ensure timely and high-quality delivery.

Must-Have Skills
• 5+ years of cumulative experience in data engineering or similar roles.
• Strong hands-on experience with: Azure Data Factory (ADF); PySpark, Spark SQL, and Python; SQL Server, SSIS, SSRS; Databricks; Power BI.
• Deep understanding of: Data Warehousing, Data Lake, and Data Lakehouse architecture; Reference and Master Data Management, Data Governance, MLOps, and AI/ML solutions.
• Proficient in: ETL/ELT processes, data modeling, performance tuning, and pipeline optimization.
• Strong experience with: workflow documentation, Jira, Confluence, and ServiceNow.
• Excellent skills in: problem-solving, time management, stakeholder communication, task prioritization, and team collaboration.
• Ability to thrive under pressure while maintaining quality and accuracy.

Why Join CodeVyasa?
• Opportunity to work on innovative, high-impact projects alongside a team of top-tier professionals.
• Exposure to cutting-edge technologies and global clients.
• Continuous learning and professional growth opportunities.
• Flexible work environment and supportive company culture.
• Competitive salary, comprehensive benefits, and free healthcare coverage.

📩 You can reach out to me at kumkum@codevyasa.com for more details.
Posted 4 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Microsoft. Management Level: Senior Associate.

Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: We are seeking a Data Engineer to design, develop, and maintain data ingestion processes for a data platform built using Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF and Azure Databricks, and requires strong SQL skills.

Responsibilities: Key responsibilities include developing, testing, and optimizing ETL workflows and maintaining documentation. ETL development experience in the Microsoft data track is required. Work with business teams to translate business requirements into technical requirements. Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe.

Mandatory skill sets:
· Strong proficiency in Azure Databricks, including Spark and Delta Lake.
· Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
· Proficiency in data integration, ETL processes and T-SQL.
· Experience working in Python for data engineering.
· Experience working with Postgres databases.
· Experience working with graph databases.
· Experience in architecture design and data modelling.

Good-to-have skill sets:
· Unity Catalog / Purview
· Familiarity with Fabric/Snowflake service offerings
· Visualization tool – Power BI

Preferred skill sets: Hands-on knowledge of Python and PySpark, and strong SQL knowledge. ETL and data warehousing experience is a must.
Relevant certifications (any one, e.g., Databricks Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, Azure Solution Architect) are mandatory. Years of experience required: 5+ years. Education qualification: Bachelor's degree in Computer Science, IT, or a related field. Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor of Engineering. Degrees/Field of Study preferred: Certifications (if blank, certifications not specified). Required Skills: Data Engineering. Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}. Desired Languages (if blank, desired languages not specified). Travel Requirements. Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
Posted 4 days ago
12.0 - 16.0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Area(s) of responsibility. Experience: 12 to 16 years. Develop and maintain scalable architecture, data warehouse design and data pipelines, and build out new data source integrations to support continuing increases in data volume and complexity. Assist in designing end-to-end data and analytics solution architecture and perform POCs within Azure. Drive the design, sizing, estimation and POC activities for Azure data environments and related services for the use cases and solutions. Review the solution requirements and support architecture design to ensure the selection of appropriate technology, efficient use of resources and integration of multiple systems and technologies. Provide technical guidance, mentoring, code review and design-level technical best practices. Cloud Architect with experience in Azure ADF, Databricks and PySpark. Responsible for designing and implementing secure, scalable, and highly available cloud-based solutions and estimation on AWS and Azure Cloud. Experience in Azure Databricks and ADF, Azure Synapse and PySpark. Experience with integration of different data sources with data warehouses and data lakes is required. Experience in creating data warehouses and data lakes for reporting, AI and machine learning. Understanding of data modelling and data architecture concepts. Able to clearly articulate the pros and cons of various technologies and platforms. Collaborate with clients to understand their business requirements and translate them into technical solutions that leverage AWS and Azure cloud platforms. Define and implement cloud governance and best practices. Identify and implement automation opportunities to increase operational efficiency. Conduct knowledge sharing and training sessions to educate clients and internal teams on cloud technologies.
Posted 4 days ago
8.0 years
0 Lacs
Hyderābād
On-site
Job Title: Senior Data Engineer (Full Stack). Experience Required: 8+ years. Work Mode: Hybrid. Locations: Hyderabad / Noida / Bangalore / Indore. Must have experience with: PySpark, Databricks, ADF, Big Data, Hadoop, Hive.

Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer with a strong background in big data technologies and full-stack data pipeline development. The ideal candidate will have deep expertise in PySpark, Databricks, ADF, and the Hadoop ecosystem. This is a hybrid role requiring on-site presence in one of our listed locations.

Key Responsibilities: Design, build, and manage scalable and reliable data pipelines using PySpark and Databricks. Develop and orchestrate data workflows using Azure Data Factory (ADF). Work with large datasets in distributed environments using Hadoop, Hive, and other big data tools. Optimize data solutions for performance, scalability, and reliability. Collaborate with cross-functional teams including data scientists, analysts, and software engineers. Ensure data quality and integrity across the various stages of the data lifecycle. Participate in code reviews, troubleshooting, and performance tuning.

Required Skills & Qualifications: Minimum 8 years of experience in data engineering. Strong programming skills in PySpark and working experience with Databricks. Hands-on experience with ADF for pipeline orchestration. Proficiency with Hadoop, Hive, and other big data tools. Experience working in hybrid or distributed teams. Solid understanding of data architecture, ETL processes, and performance tuning. Excellent problem-solving and communication skills.

Good to Have: Azure cloud experience. Familiarity with DevOps practices and CI/CD for data pipelines. Experience with Delta Lake or similar data lake architectures.

Interested candidates may share their resume at humanresource[dot]professional693[at]gmail[dot]com. Job Type: Contract. Contract length: 6 months. Application Question(s): What is your notice period in days? Experience: PySpark: 8 years (Required); Databricks: 8 years (Required); ADF: 8 years (Required); Big Data: 8 years (Required); Hadoop: 8 years (Required); Apache Hive: 8 years (Required). Work Location: In person.
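A minimal, hypothetical sketch (not part of the posting) of working with Hive tables from PySpark, the kind of task listed in the responsibilities above; the database, table, and column names are assumed for illustration.

```python
# Illustrative only: database, table, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily_txn_summary")
         .enableHiveSupport()
         .getOrCreate())

# Read a Hive-managed table from the warehouse.
txns = spark.sql("SELECT account_id, txn_date, amount FROM finance.transactions")

# Aggregate to a daily summary and write it back as a partitioned Hive table.
daily = (txns.groupBy("account_id", "txn_date")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("txn_count")))

(daily.write.mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("finance.daily_txn_summary"))
```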
Posted 4 days ago
7.0 - 9.0 years
0 Lacs
Hyderābād
On-site
Category: Software Development/Engineering. Main location: India, Andhra Pradesh, Hyderabad. Position ID: J0625-0925. Employment Type: Full Time.

Position Description: Azure Databricks developer with 7-9 years of experience. We are seeking a skilled Azure Databricks Developer to design, develop, and optimize big data pipelines using Databricks on Azure. The ideal candidate will have strong expertise in PySpark, Azure Data Lake, and data engineering best practices in a cloud environment.

Key Responsibilities: Design and implement ETL/ELT pipelines using Azure Databricks and PySpark. Work with structured and unstructured data from diverse sources (e.g., ADLS Gen2, SQL DBs, APIs). Optimize Spark jobs for performance and cost-efficiency. Collaborate with data analysts, architects, and business stakeholders to understand data needs. Develop reusable code components and automate workflows using Azure Data Factory (ADF). Implement data quality checks, logging, and monitoring. Participate in code reviews and adhere to software engineering best practices.

Required Skills & Qualifications: 3-5 years of experience in Apache Spark / PySpark. 3-5 years working with Azure Databricks and Azure Data Services (ADLS Gen2, ADF, Synapse). Strong understanding of data warehousing, ETL, and data lake architectures. Proficiency in Python and SQL. Experience with Git, CI/CD tools, and version control practices.

Skills: ETL, SQL.

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
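For illustration only (not from the posting), here is a sketch of two common Spark tuning moves a role like this would apply: broadcasting a small lookup table to avoid a shuffle-heavy join, and repartitioning before a partitioned write to control output file sizes. Paths and column names are hypothetical.

```python
# Illustrative sketch: broadcast join plus controlled repartitioning before a write.
# Paths, table contents, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/adls/bronze/events")       # large table
countries = spark.read.format("delta").load("/mnt/adls/ref/countries")    # small lookup

# Broadcast the small lookup table so the join avoids a full shuffle.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

# Repartition by the write key to keep output files evenly sized.
(enriched.repartition(64, "event_date")
         .write.format("delta")
         .mode("overwrite")
         .partitionBy("event_date")
         .save("/mnt/adls/silver/events_enriched"))
```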
Posted 4 days ago
0 years
0 Lacs
India
On-site
Primary Skills: ETL, C, Azure Cloud, Python, API, Azure Functions, ADF, Spark, Scala, Azure Databricks, Snowflake, SQL Server. Secondary Skills: C#. Job Type: Full-time. Schedule: Day shift. Work Location: In person.
Posted 4 days ago
15.0 years
3 - 9 Lacs
Indore
On-site
Date: Jun 23, 2025. Job Requisition Id: 59383. Location: Hyderabad, TG, IN; Indore, MP, IN 452001.

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring Project Management Professionals in the following areas:

Technical skills: Should have 15+ years of working experience handling end-to-end DWH projects. Experience handling ETL migration/visualization projects, including technologies like AWS Glue/Redshift, Power BI/Tableau, Azure ADF/Databricks. Lead technical design and architecture discussions across cross-functional teams. Oversee software requirements (including design, architecture, and testing). Manage through agile methodologies, such as Scrum. Decipher technical needs of other departments within the organization and translate them across stakeholder groups.

Leadership skills: Act as a communications liaison between technical and non-technical audiences. Develop and maintain productive internal relationships. Facilitate cross-collaboration and understanding between IT and other departments. Generate targeted reports for different internal and/or external audiences. Stay current on the latest news, information, and trends about program management and the organization’s industry.

Business responsibilities: Organize and track jobs, clarify project scopes, proactively manage risks, deal with project escalations, ruthlessly prioritize tasks and dependencies, and problem solve. Meet specific business objectives and metrics. Support the roadmap planning process. Develop strategies and implement tactics to follow through on those strategies. Solve complex business problems within allocated timelines and budget. Represent company management to technical teams and vice versa.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
Posted 4 days ago
10.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Job Summary: We are seeking an experienced and hands-on Data Lead with deep expertise in Microsoft Azure Data Analytics ecosystem. The ideal candidate will lead the design, development, and implementation of scalable data pipelines and analytics solutions using Azure Data Factory (ADF), Synapse Analytics, Microsoft Fabric, Apache Spark, and modern data modeling techniques. A strong grasp of CDC mechanisms, performance tuning, and cloud-native architecture is essential. Key Responsibilities: Lead the architecture and implementation of scalable data integration and analytics solutions in Azure. Design and build end-to-end data pipelines using ADF, Azure Synapse Analytics, Azure Data Lake, and Microsoft Fabric. Implement and manage large-scale data processing using Apache Spark within Synapse or Fabric. Develop and maintain data models using Star and Snowflake schema for optimal reporting and analytics performance. Implement Change Data Capture (CDC) strategies to ensure near real-time or incremental data processing. Collaborate with stakeholders to translate business requirements into technical data solutions. Manage and mentor a team of data engineers and analysts. Monitor, troubleshoot, and optimize performance of data workflows and queries. Ensure best practices in data governance, security, lineage, and documentation. Stay updated with the latest developments in the Azure data ecosystem and recommend enhancements. Required Skills and Qualifications: 8–10 years of overall experience in data engineering and analytics, with at least 3+ years in a data lead role. Strong expertise in Azure Data Factory, Azure Synapse, and Microsoft Fabric. Hands-on experience with Apache Spark for large-scale data processing. Proficient in SQL, Python, or PySpark for data transformation and automation. Solid experience with CDC patterns (e.g., using ADF, or SQL-based approaches). In-depth understanding of data warehousing concepts and data modeling (Star, Snowflake). Knowledge of Power BI integration with Synapse/Fabric is a plus. Familiarity with DevOps for data pipelines, version control (Git), and CI/CD for data solutions. Strong problem-solving skills and ability to lead architecture discussions and POCs. Excellent communication and stakeholder management skills. Preferred Qualifications: Microsoft Certifications in Azure Data Engineering or Analytics. Experience with Delta Lake, Databricks, or Snowflake (as source/target). Knowledge of data privacy and compliance standards like GDPR, HIPAA. What We Offer: Opportunity to lead strategic data initiatives on the latest Azure stack. A dynamic and collaborative work environment. Access to continuous learning, certifications, and upskilling programs. Competitive compensation and benefits package.
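As a hedged illustration of one common CDC approach mentioned above, the sketch below shows a watermark-based incremental load in PySpark: read only the rows changed since the last run, append them to the curated zone, then advance the watermark. The control table, source table, and column names are assumptions for illustration, not the employer's actual design.

```python
# Illustrative watermark-based incremental (CDC-style) load.
# Table and column names (ctl.watermarks, staging.orders, modified_ts) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Last successfully loaded change timestamp, kept in a small control table.
last_ts = (spark.table("ctl.watermarks")
                .filter(F.col("table_name") == "sales.orders")
                .agg(F.max("watermark_ts"))
                .collect()[0][0])

# Pull only rows changed since the previous run from the staged extract.
changes = spark.table("staging.orders").filter(F.col("modified_ts") > F.lit(last_ts))

# Append the delta to the curated zone.
changes.write.format("delta").mode("append").saveAsTable("curated.orders")

# Advance the watermark to the newest change timestamp we just loaded.
new_ts = changes.agg(F.max("modified_ts")).collect()[0][0]
if new_ts is not None:
    spark.sql(
        "UPDATE ctl.watermarks SET watermark_ts = '{0}' "
        "WHERE table_name = 'sales.orders'".format(new_ts)
    )
```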
Posted 4 days ago
5.0 years
0 Lacs
Vishakhapatnam, Andhra Pradesh, India
On-site
Position: Azure Data Engineer. Experience: 5+ years. Shift Timings: Should be flexible (UK timings). Skills Required: Azure Synapse Analytics, Azure Data Factory (ADF) and Databricks. Work Location: Rushikonda, Visakhapatnam.

Key Responsibilities: Design, build, and maintain efficient and scalable data pipelines using Azure Synapse, ADF, and Databricks. Implement data modeling and transformation logic using DBT (Data Build Tool) to meet reporting and analytics needs. Collaborate with data architects, analysts, and business stakeholders to understand data requirements and translate them into technical solutions. Optimize data workflows for performance and cost-efficiency within the Azure ecosystem. Monitor and troubleshoot data pipelines to ensure accuracy, completeness, and timeliness of data. Maintain data quality, documentation, and governance standards. Participate in code reviews, best practices, and performance tuning of complex SQL and PySpark workflows. Automate workflows and support CI/CD processes for data engineering deployments. Develop Power BI reports and dashboards. Develop scalable data models for Power BI reporting.

Required Skills and Qualifications: 5+ years of experience in data engineering or a related field. Strong hands-on experience with Azure Synapse Analytics and Azure Data Factory (ADF). Proven experience with Databricks, including development in PySpark or Scala. Proficiency in DBT for data modeling and transformation. Expertise in analytics and reporting: a Power BI expert who can develop Power BI models, build interactive BI reports, and set up RLS in Power BI reports. Expertise in SQL and performance tuning techniques. Solid understanding of data warehousing concepts and ETL/ELT design patterns. Experience working in Agile environments and familiarity with Git-based version control. Strong communication and collaboration skills.

Preferred Qualifications: Experience with CI/CD tools and DevOps for data engineering. Familiarity with Delta Lake and Lakehouse architecture. Exposure to other Azure services such as Azure Data Lake Storage (ADLS), Azure Key Vault, and Azure DevOps. Experience with data quality frameworks or tools.
Posted 4 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Kanerika Who we are: Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth. Locations: We are located in Hyderabad, Indore and Ahmedabad (India). What You Will Do: As a Data Governance Lead at Kanerika, you will be responsible for defining, leading, and operationalizing the data governance framework, ensuring enterprise-wide alignment and regulatory compliance. Key Responsibilities: 1. Governance Strategy & Stakeholder Alignment - Develop and maintain enterprise data governance strategies, policies, and standards. - Align governance with business goals: compliance, analytics, and decision-making. - Collaborate across business, IT, legal, and compliance teams for role alignment. - Drive governance training, awareness, and change management programs. 2. Microsoft Purview Administration & Implementation - Manage Microsoft Purview accounts, collections, and RBAC aligned to org structure. - Optimize Purview setup for large-scale environments (50TB+). - Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake. - Schedule scans, set classification jobs, and maintain collection hierarchies. 3. Metadata & Lineage Management - Design metadata repositories and maintain business glossaries and data dictionaries. - Implement ingestion workflows via ADF, REST APIs, PowerShell, Azure Functions. - Ensure lineage mapping (ADF → Synapse → Power BI) and impact analysis. 4. Data Classification & Security Governance - Define classification rules and sensitivity labels (PII, PCI, PHI). - Integrate with MIP, DLP, Insider Risk Management, and Compliance Manager. - Enforce records management, lifecycle policies, and information barriers. 5. Data Quality & Policy Management - Define KPIs and dashboards to monitor data quality across domains. - Collaborate on rule design, remediation workflows, and exception handling. - Ensure policy compliance (GDPR, HIPAA, CCPA, etc.) and risk management. 6. Business Glossary & Stewardship - Maintain business glossary with domain owners and stewards in Purview. - Enforce approval workflows, standard naming, and steward responsibilities. - Conduct metadata audits for glossary and asset documentation quality. 7. Automation & Integration - Automate governance processes using PowerShell, Azure Functions, Logic Apps. - Create pipelines for ingestion, lineage, glossary updates, tagging. - Integrate with Power BI, Azure Monitor, Synapse Link, Collibra, BigID, etc. 8. Monitoring, Auditing & Compliance - Set up dashboards for audit logs, compliance reporting, metadata coverage. - Oversee data lifecycle management across its phases. - Support internal and external audit readiness with proper documentation. Tools & Technologies: - Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog - Microsoft Purview capabilities: 1. Label creation & policy setup 2. Auto-labeling & DLP 3. Compliance Manager, Insider Risk, Records & Lifecycle Management 4. 
Unified Catalog, eDiscovery, Data Map, Audit, Compliance alerts, DSPM Required Qualifications: - 7+ years of experience in data governance and data management. - Proficient in Microsoft Purview and Informatica data governance tools. - Strong in metadata management, lineage mapping, classification, and security. - Experience with ADF, REST APIs, Talend, dbt, and automation via Azure tools. - Knowledge of GDPR, CCPA, HIPAA, SOX and related compliance needs. - Skilled in bridging technical governance with business and compliance goals.
Posted 4 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
A 6-month contract opportunity, to be converted to a permanent role based on performance (MNC). The Senior Data Engineer will be responsible for designing, implementing, and maintaining data solutions on the Microsoft Azure data platform and SQL Server (SSIS, SSAS, UC4 Automic), collaborating with various stakeholders, and ensuring the efficient processing, storage, and retrieval of large volumes of data.

Technical Expertise and Responsibilities: Design, build, and maintain scalable and reliable data pipelines. Should be able to design and build solutions in Azure Data Factory and Databricks to extract, transform and load data into different source and target systems. Should be able to design and build solutions in SSIS. Should be able to analyze and understand the existing data landscape and provide recommendations/innovative ideas for rearchitecting/optimizing/streamlining to bring efficiency and scalability. Must be able to collaborate and effectively communicate with onshore counterparts to address technical gaps, requirement challenges, and other complex scenarios. Monitor and troubleshoot data systems to ensure high performance and reliability. Should be highly analytical and detail-oriented with extensive familiarity with database management principles. Optimize data processes for speed and efficiency. Ensure the data architecture supports business requirements and data governance policies. Define and execute the data engineering strategy in alignment with the company’s goals. Integrate data from various sources, ensuring data quality and consistency. Stay updated with emerging technologies and industry trends. Understand the big-picture business process utilizing deep knowledge of the banking industry and translate it into data requirements. Enable and run data migrations across different databases and servers. Perform thorough testing and validation to support the accuracy of data transformations and data verification used in machine learning models. Analyze data and different systems to define data requirements. Should be well versed in data structures and algorithms. Define data mapping, working along with the business, digital and data teams. Data pipeline maintenance/testing/performance validation. Assemble large, complex data sets that meet functional and non-functional business requirements. Analyze and identify gaps in data needs and work with business and IT to bring alignment on data needs. Troubleshoot and resolve technical issues as they arise. Optimize data flow and collection for cross-functional teams. Work closely with data counterparts onshore, product owners, and business stakeholders to understand data needs and strategies. Collaborate with IT and DevOps teams to ensure data infrastructure aligns with the overall IT architecture. Implement best practices for data security and privacy. Drive continuous improvement initiatives within the data engineering function. Understand the impact of data conversions as they pertain to servicing operations. Manage higher-volume and more complex cases with accuracy and efficiency.

Role Expectations: Design and develop warehouse solutions using Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services. Should be proficient in SSIS, SQL and query optimization. Should have worked in an onshore-offshore model managing challenging scenarios.
Expertise in working with large amounts of data (structured and unstructured), building data pipelines for ETL workloads and generating insights utilizing data science and analytics. Expertise in Azure, AWS cloud services, and DevOps/CI/CD frameworks. Ability to work with ambiguity and vague requirements and transform them into deliverables. Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently. Drive automation efforts across the data analytics team utilizing Infrastructure as Code (IaC) with Terraform, configuration management, and Continuous Integration (CI) / Continuous Delivery (CD) tools such as Jenkins. Help build and define architecture frameworks, best practices and processes. Collaborate on data warehouse architecture and technical design discussions. Expertise in Azure Data Factory and familiarity with building pipelines for ETL projects. Expertise in SQL and experience working with relational databases. Expertise in Python and ETL projects; experience in Databricks will be an added advantage. Should have expertise in the data life cycle: data ingestion, transformation, data loading, validation, and performance tuning.

Skillsets Required – Must have: SQL, PL/SQL; SSIS; SSAS; TFS; Azure Data Factory. Preferred to have: Azure Databricks; Azure Synapse; ADLS; Lakehouse Architecture; Python; SCD concepts and implementation; UC4; Power BI; DevOps CI/CD; Banking Domain.
Posted 4 days ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Kanerika
Who we are: Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI/ML, GenAI/LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Locations: We are located in Hyderabad, Indore and Ahmedabad (India).
What You Will Do:
As a Data Governance Architect at Kanerika, you will play a pivotal role in shaping and executing the enterprise data governance strategy. Your responsibilities include:
1. Strategy, Framework, and Governance Operating Model
- Develop and maintain enterprise-wide data governance strategies, standards, and policies.
- Align governance practices with business goals such as regulatory compliance and analytics readiness.
- Define roles and responsibilities within the governance operating model.
- Drive governance maturity assessments and lead change management initiatives.
2. Stakeholder Alignment & Organizational Enablement
- Collaborate across IT, legal, business, and compliance teams to align governance priorities.
- Define stewardship models and create enablement, training, and communication programs.
- Conduct onboarding sessions and workshops to promote governance awareness.
3. Architecture Design for Data Governance Platforms
- Design scalable and modular data governance architecture.
- Evaluate tools such as Microsoft Purview, Collibra, Alation, BigID, and Informatica.
- Ensure integration with metadata, privacy, quality, and policy systems.
4. Microsoft Purview Solution Architecture
- Lead end-to-end implementation and management of Microsoft Purview.
- Configure RBAC, collections, metadata scanning, business glossary, and classification rules.
- Implement sensitivity labels, insider risk controls, retention, data map, and audit dashboards.
5. Metadata, Lineage & Glossary Management
- Architect metadata repositories and ingestion workflows.
- Ensure end-to-end lineage (ADF → Synapse → Power BI).
- Define governance over the business glossary and approval workflows.
6. Data Classification, Access & Policy Management
- Define and enforce rules for data classification, access, retention, and sharing.
- Align with GDPR, HIPAA, CCPA, and SOX regulations.
- Use Microsoft Purview and MIP for policy enforcement automation.
7. Data Quality Governance
- Define KPIs, validation rules, and remediation workflows for enterprise data quality.
- Design scalable quality frameworks integrated into data pipelines.
8. Compliance, Risk, and Audit Oversight
- Identify risks and define standards for compliance reporting and audits.
- Configure usage analytics, alerts, and dashboards for policy enforcement.
9. Automation & Integration
- Automate governance processes using PowerShell, Azure Functions, Logic Apps, and REST APIs (a minimal API sketch follows this posting).
- Integrate governance tools with Azure Monitor, Synapse Link, Power BI, and third-party platforms.
Required Qualifications:
- 15+ years in data governance and management.
- Expertise in Microsoft Purview, Informatica, and related platforms.
- Experience leading end-to-end governance initiatives.
- Strong understanding of metadata, lineage, policy management, and compliance regulations.
- Hands-on skills in Azure Data Factory, REST APIs, PowerShell, and governance architecture.
- Familiar with Agile methodologies and stakeholder management.
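The automation item above (9. Automation & Integration) mentions driving governance processes through REST APIs. Purely as a hedged illustration, not a definitive implementation, the Python sketch below acquires an Azure AD token and triggers a Microsoft Purview scan run; the account, data source and scan names are hypothetical placeholders, and the scan-run endpoint path and api-version are assumptions that should be verified against the current Purview scanning API reference.

    # Minimal sketch: trigger a Microsoft Purview scan run over the data-plane REST API.
    # Assumptions: account, data source and scan names are placeholders; the endpoint
    # path and api-version below should be checked against the current Purview docs.
    import uuid
    import requests
    from azure.identity import DefaultAzureCredential

    PURVIEW_ACCOUNT = "contoso-purview"   # hypothetical account name
    DATA_SOURCE = "adls-raw-zone"         # hypothetical registered data source
    SCAN_NAME = "weekly-full-scan"        # hypothetical scan definition

    def trigger_scan() -> str:
        """Request a token for Purview and start a scan run; return the run id."""
        credential = DefaultAzureCredential()
        token = credential.get_token("https://purview.azure.net/.default").token
        run_id = str(uuid.uuid4())
        url = (
            f"https://{PURVIEW_ACCOUNT}.purview.azure.com/scan/datasources/"
            f"{DATA_SOURCE}/scans/{SCAN_NAME}/runs/{run_id}"
        )
        resp = requests.put(
            url,
            params={"api-version": "2022-07-01-preview"},  # assumed version; verify
            headers={"Authorization": f"Bearer {token}"},
            json={"scanLevel": "Full"},
        )
        resp.raise_for_status()
        return run_id

    if __name__ == "__main__":
        print("Started scan run:", trigger_scan())

An Azure Function or Logic App could call the same endpoint on a schedule, which is one common way this kind of automation is wired together.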
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities
• ARCHITECTURE AND DESIGN FOR DATA ENGINEERING AND MACHINE LEARNING PROJECTS: Establish the architecture and target design for data engineering and machine learning projects.
• REQUIREMENT ANALYSIS, PLANNING, EFFORT AND RESOURCE NEEDS ESTIMATION: Analyse the current inventory, review and formalize requirements, and produce the project plan and execution plan.
• ADVISORY SERVICES AND BEST PRACTICES: Troubleshooting, performance tuning, cost optimization, operational runbooks and mentoring.
• LARGE MIGRATIONS: Assist customers with large migrations to Databricks from Hadoop ecosystems, data warehouses (Teradata, DataStage, Netezza, Ab Initio), ETL engines (Informatica), SAS, SQL, DW, and cloud-based data platforms such as Redshift, Snowflake and EMR.
• DESIGN, BUILD AND OPTIMIZE DATA PIPELINES: Deliver a best-in-class Databricks implementation with flexibility for future iterations (a minimal PySpark sketch follows this posting).
• PRODUCTION READINESS: Assist customers with production readiness, including exception handling, production cutover, capture analysis, alert scheduling and monitoring.
• MACHINE LEARNING (ML) – MODEL REVIEW, TUNING, ML OPERATIONS AND OPTIMIZATION: Build and review ML models, ML best practices, the model lifecycle, ML frameworks and deployment of models to production.
Must Have:
▪ Pre-sales experience is a must.
▪ Hands-on experience with distributed computing frameworks such as Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL).
▪ Willingness to work with product teams to best optimize product features/functions.
▪ Experience with batch workloads and real-time streaming at high data volumes and frequencies.
▪ Performance optimization of Spark workloads.
▪ Environment setup, user management, authentication and cluster management on Databricks.
▪ Professional curiosity and the ability to get up to speed on new technologies and tasks.
▪ Good understanding of SQL and a good grasp of relational and analytical database management theory and practice.
Key Skills:
• Python, SQL and PySpark
• Big Data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase)
• Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
• Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)
• AWS (Lambda, AWS Glue, S3, Redshift)
• Data modelling, ETL methodology
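For the pipeline item above, here is a minimal, hedged PySpark sketch of the kind of batch job such an engagement typically involves: read raw CSV, de-duplicate and type-cast, and append to a partitioned Delta table. The mount paths, column names and schema are hypothetical placeholders, not part of the original posting.

    # Minimal PySpark sketch: batch ingest of raw CSV into a Delta table on Databricks.
    # Paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-batch-ingest").getOrCreate()

    raw = (
        spark.read
        .option("header", True)
        .option("inferSchema", True)
        .csv("/mnt/raw/orders/*.csv")        # hypothetical landing path
    )

    cleaned = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("ingest_date", F.current_date())
           .filter(F.col("amount") > 0)
    )

    (
        cleaned.write
        .format("delta")
        .mode("append")
        .partitionBy("ingest_date")
        .save("/mnt/curated/orders")         # hypothetical curated path
    )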
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Greater Hyderabad Area
On-site
About Us:
Join our stealth-mode AI startup on a mission to revolutionize AI and data solutions. Headquartered in Hyderabad, we are a well-funded startup with a world-class team and a passion for innovation in AI, NLP, Computer Vision, and Speech Recognition. We are looking for a highly motivated Data Engineer with 2 to 4 years of experience to join our team and work on cutting-edge projects in AI and big data technologies.
Role Overview:
As a Data Engineer, you will design, build, and optimize scalable data pipelines and platforms to support our AI-driven solutions. You'll collaborate with cross-functional teams to enable real-time data processing and insights for enterprise-level applications.
Key Responsibilities:
- Develop and maintain robust data pipelines using tools like PySpark, Kafka, and Airflow.
- Design and optimize data workflows for high scalability and performance using Hadoop, HDFS, and Hive.
- Integrate structured and unstructured data from diverse sources into a centralized platform.
- Leverage big data technologies for real-time processing and streaming using Spark Streaming and NiFi (a minimal streaming sketch follows this posting).
- Work on cloud-based platforms such as AWS, Azure, and GCP to deploy and monitor scalable data solutions.
- Collaborate with AI/ML teams to deploy machine learning models using MLflow and integrate AI capabilities into data pipelines.
- Automate and monitor workflows to ensure seamless operations using CI/CD pipelines, Kubernetes, and Docker.
- Implement data validation, performance testing, and troubleshooting of large-scale datasets.
- Prepare and share actionable insights through BI tools like Tableau and Grafana.
Required Skills and Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or related fields.
- Experience: 2 to 4 years in data engineering roles, working with big data ecosystems.
- Technical Proficiency:
  - Big Data Tools: Hadoop, HDFS, PySpark, Hive, Sqoop, Kafka, Spark Streaming, Airflow, Presto, NiFi.
  - Cloud Platforms: AWS (Glue, S3, EMR), Azure (ADF, HDInsight), GCP (BigQuery, Pub/Sub).
  - Programming Languages: Python, SQL, Scala.
  - DevOps & Automation: Jenkins, Ansible, Kubernetes, Docker.
  - Databases: MySQL, Oracle, HBase, Redis.
  - Visualization Tools: Tableau, Grafana, Zeppelin.
- Knowledge of machine learning models, AI tools (e.g., TensorFlow, H2O), and feature engineering is a plus.
- Strong problem-solving skills with attention to detail and the ability to manage multiple projects.
- Excellent communication and collaboration skills in a fast-paced environment.
What We Offer:
- Opportunity to work on innovative AI projects with a global impact.
- Collaborative work culture with access to cutting-edge technologies.
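The real-time processing responsibility above could look roughly like the following Spark Structured Streaming sketch, which consumes a Kafka topic and lands it as Delta. The broker address, topic, schema and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster.

    # Minimal sketch: consume a Kafka topic with Spark Structured Streaming and land it
    # as Delta. Broker, topic, schema and checkpoint path are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("user_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
        .option("subscribe", "user-events")                  # hypothetical topic
        .option("startingOffsets", "latest")
        .load()
        .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    query = (
        events.writeStream
        .format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/user-events")
        .outputMode("append")
        .start("/mnt/curated/user_events")
    )
    query.awaitTermination()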
Posted 5 days ago
5.0 - 31.0 years
16 - 17 Lacs
Hyderabad
On-site
Job Title: Cloud Migration Consultant – (AWS to Azure)
Experience: 4+ years in application assessment and migration
About the Role
We're looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You'll also support application modernization efforts in Azure, with exposure to AWS as needed.
Key Responsibilities
- Assess application readiness and document architecture, dependencies, and migration strategy.
- Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, CloudockIt, and PowerShell.
- Create architecture diagrams and migration playbooks, and maintain Azure DevOps boards.
- Set up applications both on-premises and in cloud environments (primarily Azure).
- Support proof-of-concepts (PoCs) and advise on migration options.
- Collaborate with application, database, and infrastructure teams to enable a smooth transition to migration factory teams.
- Track progress, blockers, and risks, reporting status to project leadership in a timely manner.
Required Skills
- 4+ years of experience in cloud migration and assessment
- Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.)
- Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3)
- Experience with Java (Spring Boot)/C#, .NET/Python, Angular/React.js, REST APIs
- Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps
- Network infrastructure understanding (VNets, NSGs, Firewalls, WAFs)
- IAM knowledge: OAuth, SAML, Okta/SiteMinder
- Experience with Big Data tools like Databricks, Hadoop, Oracle, DocumentDB
Preferred Qualifications
- Azure or AWS certifications
- Prior experience with enterprise cloud migrations (especially in the Microsoft ecosystem)
- Excellent communication and stakeholder management skills
- Educational qualification: B.E/B.Tech/MCA
Posted 5 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Azure Solutions Architect or Lead
Skills: Data warehousing, SQL, ETL, Python/PySpark, understanding of data governance, data security & compliance, Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage
Experience Required: 10 - 15 years
Job Location: Hyderabad, Pune & Greater Noida
We at Coforge are hiring an Azure Solutions Architect or Lead with the following skill set:
- Design and build data architecture frameworks leveraging Azure services (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database, ADLS Gen2, Synapse Engineering, Fabric Notebook, PySpark, Scala, Python, etc.).
- Define and implement reference architectures and architecture blueprinting.
- Demonstrated ability to discuss a wide variety of data engineering tools and architectures across cloud providers, especially on the Azure platform.
- Experience building data products, data processing frameworks, metadata-driven ETL pipelines, data security, data standardization, data quality and data reconciliation workflows (a small metadata-driven pipeline sketch follows this posting).
- Vast experience building data products on the MS Azure / Fabric platform: Azure Managed Instance, Microsoft Fabric, Lakehouse, Synapse Engineering, MS OneLake.
- Hands-on development experience in the above technologies.
- Experience implementing DevOps.
- Implement data modeling best practices (dimensional modeling, data vault).
- Ensure data security and compliance using Azure security tools (Azure Active Directory, Azure Key Vault).
- Implement data governance and data quality processes.
- Utilize version control tools (for example, Git).
- Work with Infrastructure as Code (for example, Terraform, ARM templates).
- Work within an Agile environment (for example, Scrum).
- Effectively communicate with stakeholders at all levels.
- 10+ years of experience in Data Warehousing and Azure Cloud technologies.
- Strong hands-on experience with Azure Fabric, Synapse, ADF, SQL, Python/PySpark.
- Proven expertise in designing and implementing data architectures on Azure using Microsoft Fabric, Azure Synapse, ADF and MS Fabric notebooks.
- Exposure to Azure DevOps and Business Intelligence.
- Solid understanding of data governance, data security, and compliance.
- Excellent communication and collaboration skills.
- Ability to work effectively in a UK shift (1 PM IST to 9:30 PM IST).
- Ability to work in a hybrid environment, with 3 days/week in office.
Please share your CV on Gaurav.2.Kumar@coforge.com or WhatsApp 9667427662 for any queries.
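As a rough, hedged illustration of the metadata-driven ETL pipelines mentioned above: a small control list (in practice a control table or config file) drives which sources get loaded into bronze tables. All paths and table names are hypothetical placeholders.

    # Minimal sketch of a metadata-driven ingestion loop: a config list drives which
    # sources are copied into the lake. Names and paths are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

    # In practice this config would live in a control table or a JSON file in ADLS.
    ingest_config = [
        {"source_path": "/mnt/raw/customers", "format": "parquet", "target_table": "bronze.customers"},
        {"source_path": "/mnt/raw/orders",    "format": "csv",     "target_table": "bronze.orders"},
    ]

    for entry in ingest_config:
        reader = spark.read.format(entry["format"])
        if entry["format"] == "csv":
            reader = reader.option("header", True).option("inferSchema", True)
        df = reader.load(entry["source_path"])
        (
            df.write
            .format("delta")
            .mode("overwrite")
            .saveAsTable(entry["target_table"])   # assumes the bronze schema already exists
        )
        print(f"Loaded {df.count()} rows into {entry['target_table']}")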
Posted 5 days ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world. It drives us beyond generational gaps and disruptions of the future.
We are looking forward to hiring Project Management professionals in the following areas:
Technical skills
- 15+ years of working experience handling end-to-end DWH projects.
- Experience handling ETL migration / visualization projects, including technologies like AWS Glue/Redshift, Power BI/Tableau, and Azure ADF/Databricks.
- Lead technical design and architecture discussions across cross-functional teams.
- Oversee software requirements (including design, architecture, and testing).
- Manage through agile methodologies, such as Scrum.
- Decipher technical needs of other departments within the organization and translate them across stakeholder groups.
Leadership skills
- Act as a communications liaison between technical and non-technical audiences.
- Develop and maintain productive internal relationships.
- Facilitate cross-collaboration and understanding between IT and other departments.
- Generate targeted reports for different internal and/or external audiences.
- Stay current on the latest news, information, and trends about program management and the organization's industry.
Business responsibilities
- Organize and track jobs, clarify project scopes, proactively manage risks, deal with project escalations, ruthlessly prioritize tasks and dependencies, and problem-solve.
- Meet specific business objectives and metrics.
- Support the roadmap planning process.
- Develop strategies and implement tactics to follow through on those strategies.
- Solve complex business problems within allocated timelines and budget.
- Represent company management to technical teams and vice versa.
At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and ethical corporate culture.
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Description:
Our client is an EU subsidiary of a global financial bank working in multiple markets and asset classes. The bank's data store has been transformed into a data warehouse (DWH), which is the central source for regulatory reporting. It is also intended to be the core data integration platform, which not only provides data for regulatory reporting but also for risk modelling, portfolio analysis, ad hoc analysis & reporting (Finance, Risk, other), MI reporting, data quality management, etc. Due to the high demand of regulatory requirements, a number of regulatory projects are in progress to reflect regulatory requirements in existing regulatory reports and to develop new regulatory reports on MDS. Examples are IFRS9, AnaCredit, IRRBB, the new Deposit Guarantee Schemes Directive (DGSD), the Bank Data Retrieval Portal (BDRP) and the Fundamental Review of the Trading Book (FRTB).
The DWH / ETL Tester will work closely with the development team to design and build interfaces and integrate data from a variety of internal and external data sources into the new enterprise data warehouse environment. The ETL Tester will be primarily responsible for testing the enterprise data warehouse using automation within industry-recognized ETL standards, architecture, and best practices.
Responsibilities:
Testing the bank's data warehouse system changes, testing the changes (user stories), supporting IT integration testing in TST and supporting business stakeholders with User Acceptance Testing. This is a hands-on position: you will be required to write and execute test cases and build test automation where applicable.
Overall Purpose of Job
- Test the MDS data warehouse system
- Validate regulatory reports
- Support IT and business stakeholders during the UAT phase
- Contribute to improvement of testing and development processes
- Work as part of a cross-functional team and take ownership of tasks
- Contribute to testing deliverables
- Ensure the implementation of test standards and best practices for the agile model and contribute to their development
- Engage with internal stakeholders in various areas of the organization to seek alignment and collaboration
- Deal with external stakeholders / vendors
- Identify risks / issues and present associated mitigating actions, taking into account the criticality of the domain of the underlying business
- Contribute to continuous improvement of standard testing processes
Additional responsibilities include working closely with the systems analysts and the application developers, utilizing functional design documentation and technical specifications to facilitate the creation and execution of manual and automated test scripts, performing data analysis and creation of test data, tracking and helping resolve defects, and ensuring that all testing is conducted and documented in adherence with the bank's standards.
Mandatory Skills Description:
Must have experience/expertise: Tester, Test Automation, Data Warehouse, Banking
Technical:
- At least 5 years of testing experience, of which at least 2 years in the finance industry, with good knowledge of data warehouse and RDBMS concepts.
- Strong SQL scripting knowledge and hands-on experience with ETL and databases.
- Expertise in new-age cloud-based data warehouse solutions: ADF, Snowflake, GCP, etc.
- Hands-on expertise in writing complex SQL using multiple JOINs and highly complex functions to test various transformations and ETL requirements.
- Knowledge and experience in creating test automation for database and ETL testing regression suites (a small reconciliation-test sketch follows this posting).
- Automation using Selenium with Python (or JavaScript), Python scripts, shell scripts.
- Knowledge of framework design and REST API testing of databases using Python.
- Experience using the Atlassian tool set, Azure DevOps, and code & version management tools: Git, Bitbucket, Azure Repos, etc.
- Help and provide inputs for the creation of test plans that address the needs of cloud-based ETL pipelines.
Non-Technical:
- Able to work in an agile environment
- Experience working on high-priority projects (high pressure on delivery)
- Some flexibility outside 9-5 working hours (Netherlands time zone)
- Able to work in a demanding environment with a pragmatic, "can do" attitude
- Able to work independently and also to collaborate across the organization
- Highly developed problem-solving skills with minimal supervision
- Able to easily adapt to new circumstances / technologies / procedures
- Stress-resistant and constructive, whatever the context
- Able to align with existing standards and act with attention to detail
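A minimal, hedged sketch of the kind of automated regression check described above: pytest cases that reconcile row counts and totals between a staging source and a DWH target over ODBC. The DSNs, schemas, table and column names are hypothetical placeholders, not the bank's actual model.

    # Minimal sketch: ETL reconciliation tests comparing a staging source with a DWH
    # target table. Connection DSNs, schemas and columns are hypothetical placeholders.
    import pyodbc
    import pytest

    SOURCE_DSN = "DSN=staging_db"   # hypothetical ODBC data sources
    TARGET_DSN = "DSN=dwh_db"

    def scalar(dsn: str, sql: str):
        """Run a query that returns a single value and return that value."""
        with pyodbc.connect(dsn) as conn:
            return conn.cursor().execute(sql).fetchone()[0]

    def test_customer_row_counts_match():
        src = scalar(SOURCE_DSN, "SELECT COUNT(*) FROM stg.customer")
        tgt = scalar(TARGET_DSN, "SELECT COUNT(*) FROM dwh.dim_customer WHERE is_current = 1")
        assert src == tgt, f"Row count mismatch: staging={src}, DWH={tgt}"

    def test_customer_balance_totals_match():
        src = scalar(SOURCE_DSN, "SELECT SUM(balance) FROM stg.customer")
        tgt = scalar(TARGET_DSN, "SELECT SUM(balance) FROM dwh.dim_customer WHERE is_current = 1")
        assert src == pytest.approx(tgt), f"Balance totals differ: staging={src}, DWH={tgt}"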
Posted 5 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None
What You Will Do
- Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
- Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
- Build and optimize data architectures for operational and analytical purposes.
- Collaborate with cross-functional teams to gather and define data requirements.
- Implement data quality, data governance, and data security practices.
- Manage and optimize cloud-based data platforms (Azure/AWS).
- Develop and maintain Python/PySpark libraries for data ingestion, processing and integration with both internal and external data sources.
- Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Develop frameworks for data ingestion, transformation, and validation.
- Mentor junior data engineers and guide best practices in data engineering.
- Evaluate and integrate new technologies and tools to improve data infrastructure.
- Ensure compliance with data privacy regulations (HIPAA, etc.).
- Monitor performance and troubleshoot issues across the data ecosystem.
- Automate deployment of data pipelines using GitHub Actions / Azure DevOps.
What You Will Need
- Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
- Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
- Extensive working experience with advanced SQL and a deep understanding of SQL.
- Good experience in Azure Data Factory (ADF), Databricks, Python and PySpark.
- Good experience with modern data storage concepts: data lake, lakehouse.
- Experience in other cloud services (AWS) and data processing technologies is an added advantage.
- Ability to enhance and develop ETL processes and resolve defects in them using cloud services.
- Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files and Access databases.
- Experience with software development methodologies (Agile, Waterfall) and version control tools.
- Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
- Good communication skills.
What Would Be Nice To Have
- AWS ETL platform – Glue, S3
- One or more programming languages such as Java, .NET
- Experience in the US healthcare domain and insurance claim processing.
What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
About Guidehouse
Guidehouse is an Equal Opportunity Employer – Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Azure Certified AI Engineer / Data Scientist
Experience: 4–6 Years
Engagement: Contract to Hire (C2H)
Location: Pune (Onsite – 5 Days a Week)
Company: Optimum Data Analytics (ODA)
About Optimum Data Analytics (ODA)
Optimum Data Analytics is a fast-growing data and AI consulting firm delivering innovative solutions to enterprise clients across industries. We specialize in data engineering, machine learning, and AI/GenAI-based platforms on cloud ecosystems.
Role Overview
We are looking for an Azure Certified AI Engineer or Data Scientist with 4–6 years of experience to join our Pune office on a full-time, onsite C2H engagement. The ideal candidate should be hands-on with building and deploying AI/ML solutions using Azure cloud services, and must hold an active Azure AI Engineer Associate or Azure Data Scientist Associate certification.
Key Responsibilities
- Design and deploy AI/ML models using Azure AI/ML Studio, Azure Machine Learning, and Azure Cognitive Services (a small training-and-tracking sketch follows this posting).
- Implement and manage data pipelines, model training workflows, and the ML lifecycle in the Azure ecosystem.
- Work with business stakeholders to gather requirements, analyze data, and deliver predictive insights.
- Collaborate with data engineers and product teams to deliver scalable and production-ready AI solutions.
- Ensure model monitoring, versioning, governance, and responsible AI practices are in place.
- Contribute to solution documentation and technical architecture.
Required Skills & Qualifications
- 4–6 years of hands-on experience in AI/ML, data science, or machine learning engineering.
- Mandatory certification: Microsoft Azure AI Engineer Associate OR Microsoft Azure Data Scientist Associate.
- Strong knowledge of Azure services: Azure Machine Learning, Cognitive Services, Azure Functions, Data Factory, and Azure Storage.
- Proficient in Python, with experience using ML libraries such as scikit-learn, TensorFlow, PyTorch, or similar.
- Solid understanding of the data science lifecycle, model evaluation, and performance optimization.
- Experience with version control tools like Git and deployment through CI/CD pipelines.
- Excellent problem-solving and communication skills.
Good To Have
- Familiarity with LLMs, prompt engineering, or GenAI tools (Azure OpenAI, Hugging Face).
- Experience with Power BI or other data visualization tools.
- Exposure to MLOps tools and practices.
Skills: machine learning, Azure, scikit-learn, OpenAI, PyTorch, Azure Machine Learning, Cognitive Services, Git, Azure AI Engineer Associate, Python, data science, TensorFlow, communication, Azure Functions, Azure Storage, ADF, Data Factory, artificial intelligence, Azure Data Scientist Associate, problem-solving, CI/CD pipelines, MLOps
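The model build-and-deploy responsibility above usually starts with a tracked training run. Below is a minimal, hedged sketch using scikit-learn with MLflow logging; the experiment name, dataset and hyperparameters are hypothetical, and on Azure Machine Learning the MLflow tracking URI would simply point at the workspace.

    # Minimal sketch: train a scikit-learn model and log parameters, metrics and the
    # model artifact with MLflow. Experiment name and hyperparameters are placeholders.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    mlflow.set_experiment("baseline-classifier")   # hypothetical experiment name

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=200, random_state=42)
        model.fit(X_train, y_train)

        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("test_auc", auc)
        mlflow.sklearn.log_model(model, artifact_path="model")

    print("Test AUC:", auc)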
Posted 5 days ago
3.0 - 6.0 years
5 - 8 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Role Overview
As a Data Governance Developer at Kanerika, you will be responsible for developing and managing robust metadata, lineage, and compliance frameworks using Microsoft Purview and other leading tools. You'll work closely with engineering and business teams to ensure data integrity, regulatory compliance, and operational transparency.
Key Responsibilities
- Set up and manage Microsoft Purview: accounts, collections, RBAC, and policies.
- Integrate Purview with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake.
- Schedule and monitor metadata scanning, classification, and lineage tracking jobs.
- Build ingestion workflows for technical, business, and operational metadata.
- Tag, enrich, and organize assets with glossary terms and metadata.
- Automate lineage, glossary, and scanning processes via REST APIs, PowerShell, ADF, and Logic Apps.
- Design and enforce classification rules for PII, PCI, and PHI (a small rule sketch follows this posting).
- Collaborate with domain owners on glossary and metadata quality governance.
- Generate compliance dashboards and lineage maps in Power BI.
Tools & Technologies
- Governance Platforms: Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog
- Integration Tools: Azure Data Factory, dbt, Talend
- Automation & Scripting: PowerShell, Azure Functions, Logic Apps, REST APIs
- Compliance Areas in Purview: Sensitivity Labels, Policy Management, Auto-labeling, Data Loss Prevention (DLP), Insider Risk Management, Records Management, Compliance Manager, Lifecycle Management, eDiscovery, Audit, DSPM, Information Barriers, Unified Catalog
Required Qualifications
- 4-6 years of experience in Data Governance / Data Management.
- Hands-on with Microsoft Purview, especially lineage and classification workflows.
- Strong understanding of metadata management, glossary governance, and data classification.
- Familiarity with Azure Data Factory, dbt, Talend.
- Working knowledge of data compliance regulations: GDPR, CCPA, SOX, HIPAA.
- Strong communication skills to collaborate across technical and non-technical teams.
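The classification-rule responsibility above can be prototyped outside any specific tool. The sketch below is a hedged, simplified illustration of regex-based PII detection over sampled column values; the patterns, threshold and sample data are placeholders and are not production-grade or Purview-specific.

    # Minimal illustration of custom PII classification logic: regex patterns that flag
    # likely PII columns from sampled values. Patterns and samples are simplified placeholders.
    import re

    PII_PATTERNS = {
        "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
        "phone_in": re.compile(r"^(\+91[\s-]?)?[6-9]\d{9}$"),   # Indian mobile format
        "pan": re.compile(r"^[A-Z]{5}\d{4}[A-Z]$"),             # Indian PAN format
    }

    def classify_column(sample_values, threshold=0.8):
        """Return the PII labels whose pattern matches at least `threshold` of the samples."""
        labels = []
        non_null = [v for v in sample_values if v]
        if not non_null:
            return labels
        for label, pattern in PII_PATTERNS.items():
            hits = sum(1 for v in non_null if pattern.match(str(v).strip()))
            if hits / len(non_null) >= threshold:
                labels.append(label)
        return labels

    # Example usage with a hypothetical column sample:
    print(classify_column(["a@x.com", "b@y.org", "c@z.net", None]))   # -> ['email']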
Posted 5 days ago