
1444 ADF Jobs - Page 13

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Functional Responsibility
- Sound knowledge of the banking domain (wholesale, retail, core banking, trade finance)
- In-depth understanding of RBI regulatory reporting and guidelines, including the RBI ADF approach document
- Understanding of the balance sheet and P&L
- Support clients by providing user manuals and training, conducting workshops, and preparing case studies
Process Adherence
- Review the initial and ongoing development of the product
- Responsible for documenting, validating, communicating, and coordinating requirements
- Support business development by preparing proposals, concept presentations, and outreach activities
- Maintain and update trackers, review test cases, and provide training to internal as well as external stakeholders
Client Management / Stakeholder Management
- Interact with clients on assignment execution and manage operational relationships effectively
- Interact with clients for requirement gathering, issue tracking, change request discussions, FRD writing, and project status reporting
People Development
- Coordinate with the assignment-specific team of consultants, developers, and QA, and monitor performance to ensure timely and effective delivery

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mysore, Karnataka, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role and Responsibilities
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results
Preferred Education: Master's Degree
Required Technical and Professional Expertise
- 5+ years of experience as an Azure Data Engineer, including 3+ years of hands-on experience with ADF/Databricks
- Databricks, Data Lake, and Python programming skills
- Experience deploying to Databricks
- Familiarity with Azure Data Factory
Preferred Technical and Professional Experience
- Good communication skills
- 3+ years of experience with ADF/Databricks/Data Lake
- Ability to communicate results to technical and non-technical audiences
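Purely as an illustration of the Databricks/Data Lake/Python stack this posting describes (not part of the original listing), here is a minimal PySpark sketch of a batch ingestion and cleansing step; the storage paths and column names are hypothetical assumptions.

```python
# Minimal PySpark sketch of the kind of batch ingestion/transformation this role describes.
# Paths, container names, and column names are illustrative assumptions, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls_batch_ingest").getOrCreate()

# Read raw CSV landed in Azure Data Lake Storage Gen2 (abfss path is hypothetical)
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/2024/")
)

# Basic cleansing: drop exact duplicates, standardise a date column, filter bad rows
clean = (
    raw.dropDuplicates()
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .filter(F.col("amount").isNotNull())
)

# Write curated output as Parquet, partitioned by date, for downstream analysis
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/")
)
```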

Posted 2 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Databricks Engineer - Lead
Primary skills: Databricks, PySpark, SQL
Secondary skills: Advanced SQL, Azure Data Factory, and Azure Data Lake
Mode of Work: Work from Office
Location: Hyderabad
Experience: 7 to 10 Years
Responsibilities
· Design and develop ETL pipelines using ADF for data ingestion and transformation.
· Collaborate with Azure stack modules like Data Lakes and SQL DW to build robust data solutions.
· Write SQL, Python, and PySpark code for efficient data processing and transformation.
· Understand and translate business requirements into technical designs.
· Develop mapping documents and transformation rules as per project scope.
· Communicate project status with stakeholders, ensuring smooth project execution.
Requirements
· 7-10 years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
· Hands-on experience with Azure services: ADLS, Azure Databricks, Data Factory, Synapse, Azure SQL DB.
· Experience in SQL, Python, and PySpark for data transformation and processing.
· Familiarity with DevOps and CI/CD deployments.
· Strong communication skills and attention to detail in high-pressure situations.
· Experience in the insurance or financial industry is preferred.
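As a hedged illustration of the PySpark-plus-SQL work outlined in the responsibilities above, the sketch below joins two curated datasets with Spark SQL; the table names, columns, and mount paths are assumptions, not details from the posting.

```python
# Illustrative only: a small PySpark + Spark SQL transformation of the kind these
# responsibilities describe. Table names and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("policy_claims_summary").getOrCreate()

# Assume two curated datasets already landed in the lake (paths are placeholders)
policies = spark.read.parquet("/mnt/curated/policies")
claims = spark.read.parquet("/mnt/curated/claims")

policies.createOrReplaceTempView("policies")
claims.createOrReplaceTempView("claims")

# Express the business rule in SQL: total approved claim amount per policy holder
summary = spark.sql("""
    SELECT p.policy_id,
           p.holder_name,
           SUM(c.claim_amount) AS total_approved_claims
    FROM policies p
    JOIN claims c
      ON c.policy_id = p.policy_id
    WHERE c.status = 'APPROVED'
    GROUP BY p.policy_id, p.holder_name
""")

summary.write.mode("overwrite").parquet("/mnt/presentation/claims_summary")
```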

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

India

On-site

Job Summary:
We are seeking a technically strong and well-rounded ETL Developer with proven experience in Azure Data Factory (ADF) and Oracle Fusion ERP systems. The ideal candidate will play a key role in migrating legacy SSIS packages, integrating complex enterprise data sources (including Oracle Fusion and Microsoft CRM), and preparing data pipelines for Power BI dashboards and AI-driven analytics.
Key Responsibilities:
- Migrate and rebuild existing SSIS packages into modern Azure Data Factory pipelines
- Design, develop, and optimize end-to-end ETL solutions using ADF
- Integrate and extract data from Oracle Fusion ERP, Oracle EBS, and Microsoft CRM
- Create and manage reusable components such as pipelines, datasets, linked services, and triggers
- Collaborate with business analysts and Power BI developers to ensure clean and accurate data flow
- Perform complex SQL scripting and transformation logic
- Monitor, troubleshoot, and tune ETL performance
- Maintain proper documentation of data sources, flows, and mappings
Must-Have Skills:
- 7+ years of hands-on experience in ETL development
- 4+ years with Azure Data Factory (ADF): pipelines, dataflows, triggers, integration runtimes
- Solid understanding of SSIS and experience in migration to ADF
- Deep knowledge of Oracle Fusion ERP data models, especially Finance, SCM, and HCM modules
- Experience with FBDI, HDL, OTBI, and BI Publisher reporting
- Strong SQL and PL/SQL development skills
- Familiarity with Azure SQL Database, Data Lake, and Blob Storage
- Knowledge of how ADF pipelines feed Power BI datasets
- Experience working with CI/CD pipelines (preferably Azure DevOps)
Nice to Have:
- Microsoft Certified: Azure Data Engineer Associate or equivalent
- Exposure to OIC (Oracle Integration Cloud) or similar iPaaS tools
- Experience with REST/SOAP APIs, JSON/XML, and Microsoft Dynamics CRM
- Prior experience supporting AI/ML analytics pipelines

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job description
Job Name: Senior Data Engineer - DBT & Snowflake
Years of Experience: 5
Job Description: We are looking for a skilled and experienced DBT-Snowflake Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!
Primary Skills: DBT, Snowflake
Secondary Skills: ADF, Databricks, Python, Airflow, Fivetran, Glue
Role Description: This data engineering role involves creating and managing the technological infrastructure of a data platform: architecting, building, and managing data flows/pipelines, constructing data storage (NoSQL, SQL), working with big data tools (Hadoop, Kafka), and using integration tools to connect sources or other databases.
Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure the development tools
- Communicate with all project stakeholders on project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal, and presentation communication (English)
- Coordinate with the UAT team
Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes and performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tool is an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment
Additional Requirement:
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake; ensure the effective transformation and loading of data from diverse sources into the data warehouse or data lake.
- Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs.
- Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.
- Establish DBT best practices to improve performance, scalability, and reliability.
- Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures.
- Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP).
- Migrate legacy transformation code into modular DBT data models.
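For readers unfamiliar with how the DBT and Airflow skills listed above fit together, here is a minimal sketch, assuming an Airflow 2.x deployment and a DBT project already installed on the worker; the project path, profile location, and schedule are placeholders, not details from the posting.

```python
# Hedged sketch: orchestrating DBT transformations with Airflow (both listed as skills above).
# The project path, profiles directory, and daily schedule are assumptions for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # assumed cadence
    catchup=False,
) as dag:
    # Run the DBT models against Snowflake (connection details live in profiles.yml)
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

    # Validate the freshly built models with DBT tests
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

    dbt_run >> dbt_test
```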

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

On-site

Role: AWS Data Engineer
Experience: 4+ years
Work Location: TCS Kolkata
Responsibilities
- AWS Data Engineer with experience in building data pipelines with Glue, Lambda, EMR, and S3
- Experience in PySpark and Python programming
- Should have PySpark, SQL, and Azure services (ADF, Databricks, Synapse)
- Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks
- Developing scalable and reusable frameworks for ingesting data sets
- Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained
- Working with event-based/streaming technologies to ingest and process data
- Working with other members of the project team to support delivery of additional project components (API interfaces, search)
- Evaluating the performance and applicability of multiple tools against customer requirements
- Knowledge of deployment frameworks such as CI/CD and the GitHub check-in process
- Able to perform data analytics, data analysis, and data profiling
- Good communication
Qualifications
- 10+2+3 years of regular education is a must
- Minimum 4+ years of relevant experience is a must
Note: Candidate should be willing to join on third-party payroll. Immediate to 30-day joiners are preferred.
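To make the Glue/Lambda/S3 pipeline-building responsibility above concrete, here is a small hedged sketch of an event-driven trigger; the Glue job name, bucket layout, and argument names are illustrative assumptions rather than anything specified in the posting.

```python
# Hedged sketch of an event-driven ingestion trigger on AWS, matching the Glue/Lambda/S3
# stack described above. Bucket, prefix, and job names are placeholders, not from the posting.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by an S3 put event; starts a Glue ETL job for each new object."""
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Kick off the (hypothetical) Glue job, passing the new object as a job argument
        response = glue.start_job_run(
            JobName="ingest_raw_to_curated",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        runs.append(response["JobRunId"])

    return {"started_job_runs": runs}
```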

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

Job Description
As an Azure DevOps Engineer, you will be responsible for:
- Designing, developing, and managing Infrastructure as Code (IaC) using Terraform to automate the provisioning of Azure resources.
- Building and maintaining CI/CD pipelines leveraging Azure DevOps Pipelines and GitHub Actions to support efficient code integration, testing, and deployment.
- Administering and configuring core Azure services including Networking, Storage, Compute, and Security.
- Collaborating with data engineering teams to support and integrate Azure Data services such as Azure Databricks and Azure Data Factory.
- Managing and deploying containerized applications using Docker, with orchestration via Kubernetes.
- Writing and maintaining automation scripts using Bash and/or Python for system management and DevOps workflows.
Profile Requirements
For this position of Azure DevOps Engineer, we are looking for someone with:
- 5+ years of experience in DevOps, with proven success in designing and implementing CI/CD workflows.
- Expertise in Terraform for infrastructure automation.
- Strong hands-on experience with Azure DevOps and GitHub Actions.
- Solid understanding of core Azure services including Networking, Compute, Storage, and Security.
- Familiarity with Azure Data services such as Databricks and ADF (Azure Data Factory).
- Proficiency in Docker and Kubernetes.
- Strong working knowledge of Linux and scripting with Bash/Python.
- Excellent problem-solving, communication, and collaboration skills.
Benefits
For this position of Azure DevOps Engineer, we plan to offer you:
- Starting gross monthly salary: negotiable, depending on your skills and experience
- Other ad-hoc bonuses (per company internal policy)
- 100% petty cash reimbursements
- 30-40 days of paid absence
- 500+ lifelong learning courses (and new ones on demand)
- Corporate laptop
- 100% flexible working hours on project demand
- Work & travel opportunities in the EU and Canada
Adastra APAM Culture Manifesto
Servant Leadership: Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager, up to the CEO, is visible and reachable for a chat regardless of their title. Decisions are taken by consent in an agile manner and executed efficiently with no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.
Performance Driven Compensation: We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value-driven, based on everyone’s ability to deliver their work to clients independently and/or lead others. There is no “annual indexation” of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skillset, and recognition by clients.
Work-Life Integration: We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy treats our lives as a single whole in which we serve ourselves, our families, and our clients in an integrated manner. We encourage 100% flexible working hours where you arrange your own day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working for clients that may be in different time zones means we give you the flexibility to design what your day will look like in accordance with personal and project preferences and needs. We appreciate time and we minimize time spent on Adastra meetings. We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remote from home whenever possible. This means saving time and money on commuting, staying home with elderly relatives and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America, or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.
Global Diversity: Adastra is an international organization. We hire globally and our biggest partners and clients are in Europe, North America, and Australia. We work on teams with individuals from different cultures, ethnicities, sexual preferences, political views, and religions. We have zero tolerance for anyone who doesn’t pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and ultimately have a good time at work.
Lifelong Learning: On annual average, we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature hundreds of courses on our Training Repo, and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic; we learn the principles of data management and we apply them to different use cases and different technology stacks. We believe that the juniors today are the seniors of tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us, we may give you an open-dated job offer so that you feel welcome to return home as others did before you.
More About Adastra: Visit http://adastragrp.com and/or contact us: HRIN@adastragrp.com
FRAUD ALERT: Be cautious of fake job postings and individuals posing as Adastra employees. HOW TO VERIFY IT'S US: Our employees will only use email addresses ending in @adastragrp.com. Any other domains, even if similar, are not legitimate. We will never request any form of payment, including but not limited to fees, certification costs, or deposits. Please reach out to HRIN@adastragrp.com only in case you have any questions.
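As a hedged illustration of the "automation scripts using Bash and/or Python" duty in this posting, the sketch below inspects Terraform plan JSON for destructive changes before a CI/CD deployment; the file names and the block-on-delete policy are assumptions, not Adastra practice.

```python
# Hedged sketch of Python automation alongside Terraform in a CI/CD pipeline:
# read the JSON emitted by `terraform show -json` and fail the stage if the plan
# would delete resources. The policy and file names are illustrative assumptions.
import json
import sys

def destructive_changes(plan_path: str) -> list[str]:
    """Return addresses of resources the plan would delete (or delete-and-recreate)."""
    with open(plan_path) as f:
        plan = json.load(f)

    flagged = []
    for change in plan.get("resource_changes", []):
        actions = change.get("change", {}).get("actions", [])
        if "delete" in actions:
            flagged.append(change["address"])
    return flagged

if __name__ == "__main__":
    # Usage: terraform show -json plan.tfplan > plan.json && python check_plan.py plan.json
    flagged = destructive_changes(sys.argv[1])
    if flagged:
        print("Plan contains destructive changes:")
        for address in flagged:
            print(f"  - {address}")
        sys.exit(1)  # fail the CI/CD stage
    print("No destructive changes detected.")
```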

Posted 2 weeks ago

Apply

0 years

5 - 9 Lacs

Bengaluru

On-site

Req ID: 330864
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Senior DevOps Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Duties:
- DevOps experience in establishing and managing CI/CD pipelines to automate the build, test, and deployment processes.
- Experience in provisioning and managing infrastructure resources in the cloud using tools like Terraform.
- Experience in Azure Databricks, Azure DevOps tools, Terraform / Azure Resource Manager, and containerization and orchestration with Docker and Kubernetes.
- Version control experience: Git or Azure Repos.
- Scripting automation: Azure CLI/PowerShell.
Must have: Proficiency in cloud technologies: Azure, Azure Databricks, ADF, CI/CD pipelines, Terraform, HashiCorp Vault, GitHub, Git.
Preferred: Containerization and orchestration with Docker and Kubernetes, IAM, RBAC, OAuth, change management, SSL certificates. Knowledge of security best practices and compliance frameworks like GDPR or HIPAA.
Minimum Skills Required:
- DevOps experience in establishing and managing CI/CD pipelines to automate the build, test, and deployment processes.
- Experience in provisioning and managing infrastructure resources in the cloud using tools like Terraform.
- Experience in Azure Databricks, Azure DevOps tools, Terraform / Azure Resource Manager, and containerization and orchestration with Docker and Kubernetes.
- Version control experience: Git or Azure Repos.
- Scripting automation: Azure CLI/PowerShell.
Must have: Proficiency in cloud technologies: Azure, Azure Databricks, ADF, CI/CD pipelines, Terraform, HashiCorp Vault, GitHub, Git.
Preferred: Containerization and orchestration with Docker and Kubernetes, IAM, RBAC, OAuth, change management, SSL certificates. Knowledge of security best practices and compliance frameworks like GDPR or HIPAA.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here.
If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Designation: Data Architect
Location: Pune
Experience: 10-15 years
Job Description
Role & Responsibilities:
- Experience in architecting large-scale analytics solutions using native services such as Azure Synapse, Data Lake, Data Factory, HDInsight, Databricks, Azure Cognitive Services, Azure ML, and Azure Event Hub.
- Assist with creation of a robust, sustainable architecture that supports requirements and provides for expansion with secured access.
- Experience in building/running large data environments for BFSI clients.
- Work with customers, end users, technical architects, and application designers to define the data requirements and data structure for BI/Analytics solutions.
- Design conceptual and logical models for the data lake, data warehouse, data mart, and semantic layer (data structure, storage, and integration).
- Lead the database analysis, design, and build effort.
- Communicate physical database designs to the lead data architect/database administrator.
- Evolve data models to meet new and changing business requirements.
- Work with business analysts to identify and understand requirements and source data systems.
Skills Required
- Big Data Technologies: Expert in big data technologies on Azure/GCP.
- ETL Platforms: Experience with ETL platforms like ADF, Glue, Ab Initio, Informatica, Talend, Airflow.
- Data Visualization: Experience in data visualization tools like Tableau, Power BI, etc.
- Data Engineering & Management: Experience in a data engineering, metadata management, database modeling and development role.
- Streaming Data Handling: Strong experience in handling streaming data with Kafka.
- Data API Understanding: Understanding of data APIs and web services.
- Data Security: Experience in data security, data archiving/backup, and encryption, and in defining standard processes for the same.
- DataOps/MLOps: Experience in setting up DataOps and MLOps.
- Integration: Work with other architects to ensure that all components work together to meet objectives and performance goals as defined in the requirements.
- Data Science Coordination: Coordinate with the data science teams to identify future data needs and requirements and create pipelines for them.
Soft Skills
- Communication, leading the team, and taking ownership and accountability for successful engagement.
- Participate in quality management reviews.
- Manage customer expectations and business user interactions.
- Deliver key research (MVP, POC) with an efficient turnaround time to help make strong product decisions.
- Demonstrate key understanding and expertise of modern technologies, architecture, and design.
- Mentor the team to deliver modular, scalable, and high-performance code.
- Innovation: Be a change agent on key innovation and research to keep the product and team at the cutting edge of technical and product innovation. (ref:hirist.tech)
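To illustrate the Kafka streaming requirement listed above, here is a minimal Spark Structured Streaming sketch (assuming the spark-sql-kafka connector package is available on the cluster); the broker address, topic name, and lake paths are placeholders, not details from the posting.

```python
# Illustrative sketch: consuming a Kafka topic with Spark Structured Streaming and
# landing it in the data lake. Broker, topic, and paths are hypothetical; the cluster
# is assumed to have the spark-sql-kafka package installed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Subscribe to a (hypothetical) customer-events topic
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "customer-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string and add ingest metadata
parsed = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
          .withColumn("ingest_date", F.to_date("timestamp"))
)

# Continuously append to the lake with checkpointing for fault-tolerant recovery
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/mnt/raw/customer_events")
    .option("checkpointLocation", "/mnt/checkpoints/customer_events")
    .partitionBy("ingest_date")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```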

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Primary skills: Technology->AWS->DevOps, Technology->Cloud Integration->Azure Data Factory (ADF), Technology->Cloud Platform->AWS Database, Technology->Cloud Platform->Azure DevOps->Azure Pipelines, Technology->DevOps->Continuous Integration - Mainframe
A day in the life of an Infoscion
- As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment.
- You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs.
- You will create requirement specifications from the business needs, define the to-be processes, and prepare detailed functional designs based on requirements.
- You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives.
- You will also contribute to unit-level and organizational initiatives with an objective of providing high-quality, value-adding solutions to customers.
If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
- Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills along with an ability to collaborate
- Ability to assess current processes, identify improvement areas, and suggest technology solutions
- Knowledge of one or two industry domains

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Customer Success Services (CSS)
Are you passionate about problem solving? If you are enthusiastic to learn cutting-edge technologies, have an interest in innovation, and are customer-centric, we want you with us! Oracle is a technology leader that’s changing how the world does business – and our Customer Success Services (CSS) team supports over 6,000 companies around the world. We’re looking for an experienced and self-motivated Sr. / Sr. Principal Support Engineer - EBS Apps Developer. Join the team of highly skilled technical experts who build and maintain our clients’ technical landscapes through tailored support services.
The EBS Oracle Applications developer is an experienced technical professional who has an understanding of business solutions, industry best practices, multiple business processes, and technology designs within the Oracle Applications supporting products and technologies. The candidate should have experience in implementation or support of large to medium Oracle Applications implementation projects. He or she should be able to operate independently to provide quality work products, and perform varied and complex duties and tasks that need independent judgment.
Your Opportunity
We are looking for flexible and open-minded experts, able to work with different technologies and address complex architectures across on-premises, cloud, or hybrid environments. We look for engineers who can quickly learn, who are willing to work with new and innovative products and solutions, and who are capable of interacting and collaborating with people in different teams globally to always provide the best-tailored solution to Oracle customers. CSS offers a professional context where engineers can develop themselves constantly and where they can always be in touch with the most innovative technologies both in on-prem and in cloud environments.
SKILLS:
- Strong technical knowledge in Oracle applications, SQL, and PL/SQL is a must.
- Strong knowledge in OAF, XML, Oracle Forms and Reports, AME, WF, and APEX is a must.
- Java, ADF, JET, and PaaS skills.
- Relevant Oracle technical certification.
- Good understanding of functional parts of the developed code (preferably in Oracle Financials and HRMS).
- Strong analytical and problem-solving skills.
- Technical troubleshooting experience.
Our Ideal Candidate
In addition to the technical capabilities, our ideal candidate is a person who:
- Is flexible to work in shifts, including night shifts, as the job involves working with customers in different time zones
- Can work independently on CEMLI objects: design, develop, and test
- Is technically strong in development and experienced in EBS Financial Modules
- Can investigate, analyze, design, and develop solutions for enhancements/developments related to CEMLIs
- Can identify the impact of patches and determine the functional and technical steps required to minimize disruption to business
- Reports progress/status/risks/issues on development on a regular basis
- Can manage the complete development pipeline and manage the scope, time, cost, and delivery of all the CEMLIs
- Can lead the support team in Incident and Problem Management and come up with innovative solutions in a short span of time
- Can understand customer requirements/user stories and implement practical solutions
- Has hands-on knowledge and expertise in Oracle EBS R12 and Fusion/SaaS modules
- Has good knowledge of business processes and application setups, and the impact of one setup on another
REQUIREMENTS:
- Minimum 10 years of relevant experience.
- Excellent problem-solving and troubleshooting skills.
- Ability to work effectively in a team, collaborating with stakeholders to solve business needs.
- Strong communication and teamwork skills.
- Self-driven and result-oriented.
- Collaborate with product owners, QA teams, and stakeholders to understand requirements, work on user stories/backlog items, and ensure high-quality delivery.
- Ability to keep track of schedules and ensure on-time delivery of assigned tasks, optimizing pace and meeting deadlines.
- Participate in standup meetings and provide progress updates regularly.
- Experience in understanding customer requirements.
- Good knowledge of business processes and application setups.
- Good technical expertise in EBS/integrations architecture.
- Fluent English (other additional languages will also be valued).
- Availability to travel and work onsite at customers for not less than 50% of the time.
- Availability to work 24x7 (on-call).
RESPONSIBILITIES:
- Work on developing technical solutions to meet business requirements gathered and documented by functional consultants.
- Identify and resolve key issues related to code change requirements and bug fixes.
- Support Oracle ERP products and services from the technical aspect, in line with the contractual agreement.
- Work with Support to resolve customer SRs.
- Conduct knowledge transfer sessions both within the Oracle team and to end users.
- Work closely with the functional team and delivery leaders to provide development work estimates and drive excellence in technical work.
- Develop and manage the technical relationship with designated account(s) in order to maximize the value of CSS to the customer.
- Develop and maintain trusted relationships with the other Oracle contacts within designated account(s) and relevant third parties.
- Act as the primary technical point of contact for Oracle Support.
- Safeguard customer satisfaction, and renewal, through quality delivery and added value.
- Engage directly in architectural tasks and collaborate with colleagues to implement best practices specific to the projects.
- Detect and address performance challenges, security issues, and other technical concerns proactively.
- Analyze, troubleshoot, and solve, whenever feasible, the issues the customer may face using Oracle products.
- Identify required/recommended actions on customer systems as the main output of service delivery, based on own knowledge and experience.
- Escalate customer issues to the Technical Account Manager at the right time, where relevant.
- Ensure adherence to internal methodology, tools, and quality standards.
- Actively participate in services development.
- Actively collaborate with other engineers in the team or in other teams to share knowledge and experience that can benefit CSS business results.
Career Level - IC4
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.
Your Key Responsibilities
- Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
- ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
- Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
- Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
- Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
- Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
- DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
- Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
- Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.
Skills and Attributes for Success
- Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
- Solid understanding of ETL/ELT design and implementation principles
- Strong SQL and PySpark skills for data transformation and validation
- Exposure to Python for automation and scripting
- Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
- Experience in working with Power BI or Tableau for data visualization and reporting support
- Strong problem-solving skills, attention to detail, and commitment to data quality
- Excellent communication and documentation skills to interface with technical and business teams
- Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.
To qualify for the role, you must have
- 4–6 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- Ability to support data pipelines in a rotational on-call or production support environment
- Comfortable working in a remote/hybrid and cross-functional team setup
Technologies and Tools
Must haves
- Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
- Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
- Azure Synapse: Familiarity with distributed data querying and data warehousing.
- Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
- ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
- Power BI or Tableau for reporting support
- Monitoring/logging using Azure Monitor or Log Analytics
- Azure DevOps and Git for CI/CD and version control
- Python and/or PySpark for scripting and data handling
- Informatica Cloud Data Integration (CDI) or similar ETL tools
- Shell scripting or command-line data handling
- SQL (across distributed and relational databases)
What We Look For
- Enthusiastic learners with a passion for DataOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
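As a hedged sketch of the data profiling and validation responsibility described in this listing, the snippet below computes a quick profile and applies two simple rules in PySpark; the dataset path, key columns, and thresholds are assumptions for illustration only.

```python
# A minimal sketch of the data profiling/validation step described above, in PySpark.
# Dataset path, expected columns, and rules are assumptions, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("holdings_validation").getOrCreate()

df = spark.read.parquet("/mnt/curated/holdings")  # hypothetical curated dataset

# Row count and per-column null counts for a quick profile
total_rows = df.count()
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).collect()[0].asDict()

# Simple validation rules: no duplicate business keys, no missing prices
duplicate_keys = (
    df.groupBy("security_id", "as_of_date").count().filter("count > 1").count()
)
missing_prices = df.filter(F.col("price").isNull()).count()

print(f"rows={total_rows}, null_counts={null_counts}")
if duplicate_keys or missing_prices:
    raise ValueError(
        f"Validation failed: {duplicate_keys} duplicate keys, {missing_prices} missing prices"
    )
```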

Posted 2 weeks ago

Apply

0 years

6 - 8 Lacs

Bengaluru

On-site

- Manage and maintain the RBI ADF/reporting system to ensure the timely and accurate submission of regulatory returns as and when required.
- Act as Money Laundering Reporting Officer and perform all duties and responsibilities to ensure adherence to RBI rules and regulatory bodies.
- Liaise with the RBI/FIU and other regulatory bodies as required to ensure compliance with RBI rules and regulations and other requirements of a legal nature.
- Provide managers of the other teams with appropriate and up-to-date information or data immediately as requested.
- Authorize and release payment orders filtered by the OFAC filtering system.
- Prepare the returns of the bank audit conducted by the audit company.
- Work closely with the Chief Executive Officer in overseeing compliance procedures and advise on risk management.
- Assist the Chief Executive Officer with the development of the entity-wide budget for compliance efforts, including identifying resource gaps, and direct resources appropriately, whether within the department or in other areas of the Bank.
- Create processes and manuals according to KEB Hana Bank policy, to be reviewed periodically.
- Manage daily and monthly audits.
- Manage audits set up by H.O.
- Act in the capacity of Internal Auditor, ensuring that regular audits are performed of all departments of the branch.
- Train all staff on internal control and AML, and report to H.O.
- Establish and execute the yearly Compliance Plan and report results to H.O.
- Monitor the internal control process and submit a Monthly Compliance Report to H.O.
- Preview and assess new/renewed contracts, proposals for launching new banking products/services, and submissions of the bank’s internal data to external parties.
- Manage and maintain a close relationship with regulators for cooperation.
Job Type: Full-time
Pay: ₹650,000.00 - ₹800,000.00 per year
Schedule: Day shift
Work Location: In person

Posted 2 weeks ago

Apply

10.0 years

26 - 30 Lacs

Chennai

On-site

We are looking for an Associate Division Manager for one of our major clients. This role includes designing and building AI/ML products at scale to improve customer understanding and sentiment analysis, recommend customer requirements, recommend optimal inputs, and improve process efficiency. This role will collaborate with product owners and business owners.
Key Responsibilities:
- Lead a team of junior and experienced data scientists
- Lead and participate in end-to-end ML project deployments that require feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions
- Push the state of the art in the application of data mining, visualization, predictive modelling, statistics, trend analysis, and other data analysis techniques to solve complex business problems, including lead classification, recommender systems, product life-cycle modelling, design optimization, and product cost & weight optimization
Functional Responsibilities:
- Leverage and enhance applications utilizing NLP, LLM, OCR, image-based models, and deep learning neural networks for use cases including text mining, speech, and object recognition
- Identify future development needs, advance new and emerging ML and AI technology, and set the strategy for the data science team
- Cultivate a product-centric, results-driven data science organization
- Write production-ready code and deploy real-time ML models; expose ML outputs through APIs
- Partner with data/ML engineers and vendor partners on input data pipeline development and ML model automation
- Provide leadership to establish world-class ML lifecycle management processes
Qualification: MTech / BE / BTech / MSc in CS
Experience:
- Over 10 years of applied machine learning experience in the fields of machine learning, statistical modelling, predictive modelling, text mining, natural language processing (NLP), LLM, OCR, image-based models, and deep learning
- Expert Python programmer; SQL, C#; extremely proficient with the SciPy stack (e.g. numpy, pandas, scikit-learn, matplotlib)
- Proficiency with open-source deep learning platforms like TensorFlow, Keras, and PyTorch
- Knowledge of the big data ecosystem (Apache Spark, Hadoop, Hive, EMR, MapReduce)
- Proficient in cloud technologies and services (Azure Databricks, ADF, Databricks MLflow)
Functional Competencies:
- A demonstrated ability to mentor junior data scientists and proven experience in collaborative work environments with external customers
- Proficient in communicating technical findings to non-technical stakeholders
- Holding routine peer code reviews of ML work done by the team
- Experience in leading and/or collaborating with small to mid-sized teams
- Experienced in building scalable, highly available distributed systems in production
- Experienced in ML lifecycle management and MLOps tools & frameworks
Job type: FTE
Location: Chennai
Job Type: Contractual / Temporary
Pay: ₹2,633,123.63 - ₹3,063,602.96 per year
Schedule: Monday to Friday
Education: Bachelor's (Preferred)
Work Location: In person
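Purely illustrative of the sentiment-analysis and scikit-learn proficiency this posting asks for, here is a minimal TF-IDF plus logistic-regression baseline; the tiny inline dataset exists only to make the sketch runnable and is not from the posting.

```python
# Hedged sketch of a customer sentiment-analysis baseline using the SciPy/scikit-learn
# stack listed above. The inline dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "delivery was quick and support was helpful",
    "product arrived damaged and no one responded",
    "great value, will order again",
    "terrible experience, refund took weeks",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# TF-IDF features feeding a linear classifier: a common sentiment baseline
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("prediction:", model.predict(["support resolved my issue quickly"]))
```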

Posted 2 weeks ago

Apply

0 years

8 - 9 Lacs

Chennai

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Explore innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.
Job description:
About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.
About UPS Supply Chain Symphony™
The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making.
About the role
We are seeking a Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. You will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure.
Primary Skills
- Data Engineering: Azure Data Factory (ADF), Azure Databricks
- Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB)
- Data Modeling: NoSQL data modeling, data warehousing concepts
- Performance Optimization: Data pipeline performance tuning and cost optimization
- Programming Languages: Python, SQL, PySpark
Secondary Skills
- DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation
- Security and Compliance: Implementing data security and governance standards
- Agile Methodologies: Experience in Agile/Scrum environments
Soft Skills
- Strong problem-solving abilities and attention to detail
- Excellent communication skills, both verbal and written
- Effective time management and organizational capabilities
- Ability to work independently and within a collaborative team environment
- Strong interpersonal skills to engage with cross-functional teams
Educational Qualifications
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Relevant certifications in Azure and Data Engineering, such as:
- Microsoft Certified: Azure Data Engineer Associate
- Microsoft Certified: Azure Solutions Architect Expert
- Databricks Certified Data Engineer Associate or Professional
Contract type: permanent (CDI). At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
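As a hedged illustration of writing curated records into the Cosmos DB component named above, the sketch below uses the Azure Cosmos Python SDK; the endpoint, key, database, container, and document shape are all placeholder assumptions, not details from the posting.

```python
# Hedged sketch: upserting curated records into Cosmos DB with the azure-cosmos SDK.
# Endpoint, key, database, container, and document fields are placeholders.
from azure.cosmos import CosmosClient

ENDPOINT = "https://example-account.documents.azure.com:443/"  # placeholder
KEY = "<primary-key>"                                           # placeholder

client = CosmosClient(ENDPOINT, credential=KEY)
database = client.get_database_client("supplychain")
container = database.get_container_client("shipments")

# Upsert keeps the load idempotent: re-running the pipeline overwrites the same ids
shipments = [
    {"id": "SHP-1001", "status": "IN_TRANSIT", "origin": "Chennai", "destination": "Mumbai"},
    {"id": "SHP-1002", "status": "DELIVERED", "origin": "Pune", "destination": "Delhi"},
]

for shipment in shipments:
    container.upsert_item(shipment)

print(f"Upserted {len(shipments)} documents into 'shipments'")
```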

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Gurugram, Bengaluru

Hybrid

Warm greetings from SP Staffing!
Role: Azure Data Engineer
Experience Required: 5 to 8 yrs
Work Location: Bangalore/Gurgaon
Required Skills: Azure Databricks, ADF, PySpark/SQL
Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.
Your Key Responsibilities
- Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
- ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
- Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
- Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
- Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
- Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
- DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
- Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
- Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.
Skills and Attributes for Success
- Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
- Solid understanding of ETL/ELT design and implementation principles
- Strong SQL and PySpark skills for data transformation and validation
- Exposure to Python for automation and scripting
- Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
- Experience in working with Power BI or Tableau for data visualization and reporting support
- Strong problem-solving skills, attention to detail, and commitment to data quality
- Excellent communication and documentation skills to interface with technical and business teams
- Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.
To qualify for the role, you must have
- 4–6 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- Ability to support data pipelines in a rotational on-call or production support environment
- Comfortable working in a remote/hybrid and cross-functional team setup
Technologies and Tools
Must haves
- Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
- Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
- Azure Synapse: Familiarity with distributed data querying and data warehousing.
- Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
- ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
- Power BI or Tableau for reporting support
- Monitoring/logging using Azure Monitor or Log Analytics
- Azure DevOps and Git for CI/CD and version control
- Python and/or PySpark for scripting and data handling
- Informatica Cloud Data Integration (CDI) or similar ETL tools
- Shell scripting or command-line data handling
- SQL (across distributed and relational databases)
What We Look For
- Enthusiastic learners with a passion for DataOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You’ll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot pipeline failures and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills And Attributes For Success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience in working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for DataOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You’ll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot pipeline failures and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills And Attributes For Success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience in working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for DataOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You’ll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot pipeline failures and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills And Attributes For Success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience in working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for DataOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Title: Lead ADF - Microsoft BI & Data Warehouse Lead
Skills: Minimum 6+ years in ADF/ETL, Microsoft BI, DW, T-SQL/SQL, procedure writing, Azure/SQL
Job Description: Azure Cloud - API cloud integration is mandatory, with 3+ years of experience

Overview
We are looking for an experienced Microsoft BI & Data Warehouse Lead to design, develop, and maintain robust data warehouse and ETL solutions using the Microsoft technology stack. The ideal candidate will have extensive expertise in SQL Server development, Azure Data Factory (ADF), and other Microsoft-based data integration tools such as Azure Functions. As a lead, you will play a crucial role in driving data strategy, ensuring system performance, and delivering scalable data solutions that meet organizational needs.

Responsibilities
Data Warehouse Development
Design and implement scalable and efficient data warehouse solutions.
Develop complex SQL Server-based solutions, including T-SQL queries, stored procedures, and performance tuning (a brief sketch of calling such a procedure follows this listing).
Optimize SQL Server databases, develop T-SQL scripts, and improve query performance.
ETL Development And Maintenance
Build and optimize ETL workflows using Azure Data Factory (ADF) for data integration from multiple sources.
Ensure high-performance data pipelines for large-scale data processing.
Integrate and automate data processes using Azure Functions to extend ETL capabilities.
Cloud Integration
Implement cloud-native solutions leveraging Azure SQL Database, Azure Functions, and Synapse Analytics.
Support hybrid data integration scenarios combining on-premises and Azure services.
Data Governance And Quality
Establish and maintain robust data quality frameworks and governance standards.
Ensure consistency, accuracy, and security of data across all platforms.
Leadership And Collaboration
Lead a team of BI and data professionals, providing mentorship and technical direction.
Partner with stakeholders to understand business requirements and deliver data-driven solutions.
Define project goals, timelines, and resources for successful execution.
Be flexible to support multiple IT platforms.
Manage day-to-day activities: Jira requests, SQL execution, access requests, resolving alerts and updating tickets.

Requirements
Education: Bachelor's degree in Computer Science, Information Technology, or a related field (B.Tech / B.E).
Skills (Mandatory):
Strong experience with SQL Server development (T-SQL, indexing, optimization).
In-depth knowledge of BI-DW concepts and data modelling.
Extensive experience with Azure Data Factory (ADF) for ETL.
Advanced knowledge of Azure SQL Database and cloud technologies.
Excellent problem-solving, analytical, and leadership skills.
Strong communication and teamwork abilities.
Ability to translate business requirements into technical solutions.
Experience with cloud-based data platforms and migrations.
Familiarity with DevOps for CI/CD in data integration pipelines.
Skills (Good to Have):
Experience with additional cloud platforms (e.g., Azure, AWS, Google Cloud).
Advanced skills in data visualization tools (e.g., Power BI, SSRS, Tableau).
Proficiency in Python or other scripting languages.
Experience with Databricks is a plus.
Certifications
Relevant certifications in SQL, BI-DW, or cloud platforms are highly desirable.
Microsoft Certified: Azure Data Fundamentals or any cloud certification (preferred).
Skills (Mandatory) for the Lead role: Client facing with US-based clients; strong communication skills required. (ref:hirist.tech)
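The stored-procedure work mentioned above is typically driven from an ETL step that calls a parameterized T-SQL procedure. Below is a minimal sketch of doing that from Python with pyodbc; the server, database, authentication mode, and procedure name (etl.LoadDailySales) are hypothetical assumptions, not details from the listing.

```python
# Minimal sketch: invoke a parameterized T-SQL stored procedure from an ETL step
# via pyodbc. Server, database, and procedure names are hypothetical placeholders;
# credentials would normally come from a key vault or managed identity.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=dw;"
    "Authentication=ActiveDirectoryMsi;"  # assumption: managed-identity authentication
)

with pyodbc.connect(conn_str, autocommit=True) as conn:
    cursor = conn.cursor()
    # ODBC call syntax keeps the procedure call parameterized (no string concatenation).
    cursor.execute("{CALL etl.LoadDailySales (?, ?)}", "2024-01-31", "FULL")
    # Many procedures return a status row; read it if one is present.
    if cursor.description:
        print(cursor.fetchone())
```

The same call could equally be issued from an ADF Stored Procedure activity; the Python form is shown only to keep the example self-contained.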

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Job Title: Sr Lead ADF - Cloud API Integration
Skills: Minimum 8 years in ADF/ETL, Microsoft BI, DW, T-SQL/SQL, procedure writing, Azure/SQL

Overview
We are looking for an experienced Microsoft BI & Data Warehouse Lead to design, develop, and maintain robust data warehouse and ETL solutions using the Microsoft technology stack. The ideal candidate will have extensive expertise in SQL Server development, Azure Data Factory (ADF), and other Microsoft-based data integration tools such as Azure Functions. As a lead, you will play a crucial role in driving data strategy, ensuring system performance, and delivering scalable data solutions that meet organizational needs.

Responsibilities
Data Warehouse Development
Design and implement scalable and efficient data warehouse solutions.
Develop complex SQL Server-based solutions, including T-SQL queries, stored procedures, and performance tuning.
Optimize SQL Server databases, develop T-SQL scripts, and improve query performance.
ETL Development And Maintenance
Build and optimize ETL workflows using Azure Data Factory (ADF) for data integration from multiple sources.
Ensure high-performance data pipelines for large-scale data processing.
Integrate and automate data processes using Azure Functions to extend ETL capabilities (a short Azure Functions sketch follows this listing).
Cloud Integration
Implement cloud-native solutions leveraging Azure SQL Database, Azure Functions, and Synapse Analytics.
Support hybrid data integration scenarios combining on-premises and Azure services.
Data Governance And Quality
Establish and maintain robust data quality frameworks and governance standards.
Ensure consistency, accuracy, and security of data across all platforms.
Leadership And Collaboration
Lead a team of BI and data professionals, providing mentorship and technical direction.
Partner with stakeholders to understand business requirements and deliver data-driven solutions.
Define project goals, timelines, and resources for successful execution.
Be flexible to support multiple IT platforms.
Manage day-to-day activities: Jira requests, SQL execution, access requests, resolving alerts and updating tickets.

Requirements
Education: Bachelor's degree in Computer Science, Information Technology, or a related field (B.Tech / B.E).
Skills (Mandatory):
Strong experience with SQL Server development (T-SQL, indexing, optimization).
In-depth knowledge of BI-DW concepts and data modelling.
Extensive experience with Azure Data Factory (ADF) for ETL.
Advanced knowledge of Azure SQL Database and cloud technologies.
Excellent problem-solving, analytical, and leadership skills.
Strong communication and teamwork abilities.
Ability to translate business requirements into technical solutions.
Experience with cloud-based data platforms and migrations.
Familiarity with DevOps for CI/CD in data integration pipelines.
Skills (Good to Have):
Experience with additional cloud platforms (e.g., Azure, AWS, Google Cloud).
Advanced skills in data visualization tools (e.g., Power BI, SSRS, Tableau).
Proficiency in Python or other scripting languages.
Experience with Databricks is a plus.
Certifications
Relevant certifications in SQL, BI-DW, or cloud platforms are highly desirable.
Microsoft Certified: Azure Data Fundamentals or any cloud certification (preferred).
Notice Period: Less than 15 days, or immediate joiners only. (ref:hirist.tech)
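Extending ETL with Azure Functions, as described above, often means exposing a small HTTP-triggered function that ADF can call (for example as a webhook or callback). The sketch below uses the Azure Functions Python v2 programming model; the route, auth level, and payload fields are hypothetical assumptions for illustration only.

```python
# Hedged sketch: an Azure Functions (Python v2 programming model) HTTP trigger
# that could sit alongside ADF to extend an ETL flow, e.g. receiving a run
# callback and validating its payload. Route and field names are hypothetical.
import json
import logging

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="etl-callback", methods=["POST"])
def etl_callback(req: func.HttpRequest) -> func.HttpResponse:
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Invalid JSON body", status_code=400)

    run_id = payload.get("runId")
    status = payload.get("status")
    if not run_id or status not in {"Succeeded", "Failed"}:
        return func.HttpResponse("Missing runId or unexpected status", status_code=422)

    # Placeholder for the real work: write an audit record, raise an alert, etc.
    logging.info("Pipeline run %s finished with status %s", run_id, status)
    return func.HttpResponse(json.dumps({"accepted": True}), mimetype="application/json")
```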

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Description
Job Title: Senior Data Engineer
Experience: 7+ Years
Employment Type: Full-time

Job Summary
We are seeking a skilled Senior Data Engineer with strong experience in building scalable data pipelines and transforming complex data across enterprise platforms. The ideal candidate should have hands-on expertise in Databricks, PySpark, SQL, and ETL/ELT tools such as Informatica, AWS Glue, or DataProc. Experience with cloud data warehouses like Snowflake, BigQuery, or Delta Lake, and a strong understanding of data security, compliance, and DevOps is essential. Domain knowledge in banking, financial services, or cybersecurity is highly desirable.

Key Responsibilities
Design, build, and optimize secure data pipelines for large-scale data processing.
Develop ETL/ELT jobs and implement Data Quality (DQ) rules within Databricks and Aurora platforms (a brief Delta Lake upsert sketch follows this listing).
Collaborate with Data Architects, DQ Analysts, and Cyber SMEs in Agile POD teams.
Manage data modeling, performance tuning, and infrastructure cost optimization.
Support data governance, DQ controls (e.g., BCBS 239, DUSE, DMOVE), and compliance reporting.
Document architecture, test strategies, and ensure code quality and scalability.

Required Skills
Strong proficiency in Databricks, PySpark, SQL
Experience with ETL tools (e.g., Glue, DataProc, ADF, Informatica)
Cloud experience with AWS, Azure, or GCP
Hands-on with data modeling, DQ implementation, and performance tuning
Understanding of data security, encryption, and risk controls
Excellent communication and stakeholder collaboration skills

Preferred Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field
Experience in banking, financial services, or cybersecurity domains
Familiarity with DUSE/DMOVE frameworks and cybersecurity metrics reporting
Certification in cloud or data engineering tools is a plus

Skills: Databricks, PySpark, SQL, ETL
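ETL/ELT jobs on Databricks of the kind described above frequently land incremental data into Delta tables via a MERGE (upsert). Below is a hedged PySpark sketch of that pattern using the Delta Lake API; the paths, table names, and business key (account_id) are hypothetical placeholders rather than project specifics.

```python
# Hedged sketch: incremental upsert into a Delta table on Databricks using the
# Delta Lake MERGE API. Paths, table names, and the key column are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Newly landed batch to merge into the curated table
updates = spark.read.format("parquet").load("/mnt/landing/accounts/2024-01-31/")

target = DeltaTable.forName(spark, "curated.accounts")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.account_id = s.account_id")
    .whenMatchedUpdateAll()       # refresh existing accounts
    .whenNotMatchedInsertAll()    # insert new accounts
    .execute()
)

# Periodic maintenance often accompanies this step on Databricks.
spark.sql("OPTIMIZE curated.accounts")
```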

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from TCS! TCS is hiring for Azure Data Engineer.

Job Role: Azure Data Engineer
Experience Range: 8+
Job Location: Noida / Chennai
Interview Mode: Virtual (MS Teams)

Responsibilities of / expectations from the job:
Developing, managing and optimizing robust and reliable data pipelines using Azure-native capabilities.
Implement ADF workflows that perform data ingestion, data integration/ETL, statistical model executions, etc. (a short sketch of triggering an ADF pipeline run follows this listing).
Creating architecture for data solutions with high performance characteristics.
Bringing data and analytics products to production.
Implement CI/CD pipelines for data solutions.
Build dashboards for data stewards and business reporting.
Design & build RDBMS data models.

Added Advantage:
Python
Azure data engineer certification

TCS Eligibility Criteria:
BE/B.Tech/MCA/M.Sc./MS with a minimum of 3 years of relevant IT experience post qualification.
Only full-time courses would be considered.

Referrals are always welcome! Kindly do not apply if you have already attended an interview within the last month.

Thanks & Regards,
Jerin L Varghese
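Operating ADF workflows like those above often involves starting runs programmatically, for example from a CI/CD step or an external scheduler. The sketch below calls the Data Factory createRun REST endpoint with the requests library; the subscription, resource group, factory, pipeline name, and parameter are placeholders, and azure-identity is assumed for the AAD token.

```python
# Hedged sketch: start an ADF pipeline run through the Data Factory REST API
# (createRun). All names in angle brackets and the pipeline parameter are
# hypothetical placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY = "<data-factory-name>"
PIPELINE = "ingest_daily_sales"  # hypothetical pipeline name

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
    f"/factories/{FACTORY}/pipelines/{PIPELINE}/createRun?api-version=2018-06-01"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"runDate": "2024-01-31"},  # pipeline parameters, if the pipeline defines any
    timeout=30,
)
resp.raise_for_status()
print("Started run:", resp.json().get("runId"))
```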

Posted 2 weeks ago

Apply

4.0 - 9.0 years

13 - 23 Lacs

Pune, Chennai, Bengaluru

Hybrid

Skills: ADF, Snowflake, SQL
Interested candidates, please share your resume at juisagars@hexaware.com with the details below:
Total experience
Relevant experience
Current company
Current CTC
Expected CTC
Notice period/LWD

Posted 2 weeks ago

Apply

Featured Companies