3.0 years
0 Lacs
Gurgaon
On-site
Gurgaon | 3+ Years | Full Time We are seeking a skilled Data Engineer with strong expertise in Azure Data Factory (ADF), Snowflake, and Kafka (Confluent Platform). The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and streaming solutions. Key Responsibilities: Design, develop, and maintain ADF pipelines for data ingestion, transformation, and orchestration. Monitor and troubleshoot pipeline failures and ensure smooth data flow. Write efficient and optimized SQL queries and create complex views in Snowflake. Integrate and manage Kafka producers/consumers and implement real-time data stream processing using the Confluent Platform. Collaborate with data analysts, architects, and software developers to deliver end-to-end data solutions. (Optional) Support backend development using Java 17 and Spring Boot, if required. Candidate Requirements: 3 to 5 years of hands-on experience in: Azure Data Factory (ADF): Pipeline creation, monitoring, troubleshooting. Snowflake: Complex SQL queries, view creation, query optimization. Apache Kafka and Confluent Platform: Stream processing, producer/consumer integration. Good understanding of data modeling, ETL concepts, and cloud-based architectures. Nice to have: Experience in Java 17 and Spring Boot.
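The Kafka responsibilities above center on producer/consumer integration against the Confluent Platform. As a purely illustrative sketch (the broker address, topic name, and consumer group are placeholders, not details from the posting), a minimal consumer using the confluent-kafka Python client could look like this:

```python
from confluent_kafka import Consumer

# Placeholder connection settings; a real deployment would read these from config or a secrets store.
conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-etl",
    "auto.offset.reset": "earliest",
}

consumer = Consumer(conf)
consumer.subscribe(["orders"])  # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1 second for a record
        if msg is None:
            continue
        if msg.error():
            # Log and skip; a production consumer would add retries and a dead-letter strategy.
            print(f"Consumer error: {msg.error()}")
            continue
        # Hand the raw bytes to downstream transformation, e.g. loading into a Snowflake staging table.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```

A production version would also handle schema validation and offset-commit strategy before records flow on into Snowflake.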
Posted 2 weeks ago
12.0 years
1 - 3 Lacs
Hyderābād
On-site
Overview: Seeking a Manager, Data Operations, to support our growing data organization. In this role, you will play a key role in maintaining data pipelines and corresponding platforms (on-prem and cloud) while collaborating with global teams on DataOps initiatives. Manage the day-to-day operations of data pipelines, ensuring governance, reliability, and performance optimization on Microsoft Azure. This role requires hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, real-time streaming architectures, and DataOps methodologies. Ensure availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Support DataOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in implementing real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Contribute to the development of governance models and execution roadmaps to optimize efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to enhance enterprise-wide data operations. Collaborate on building and supporting next-generation Data & Analytics platforms while fostering an agile and high-performing DataOps team. Support the adoption of Data & Analytics technology transformations, ensuring full sustainment capabilities and automation for proactive issue identification and resolution. Partner with cross-functional teams to drive process improvements, best practices, and operational excellence within DataOps. Responsibilities: Support the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Assist in managing end-to-end data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Ensure seamless batch, real-time, and streaming data processing while focusing on high availability and fault tolerance. Contribute to DataOps automation initiatives, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps, Terraform, and Infrastructure-as-Code (IaC). Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to enable data-driven decision-making. Work with IT, data stewards, and compliance teams to align DataOps practices with regulatory and security requirements. Support data operations and sustainment efforts, including testing and monitoring processes to support global products and projects. Assist in data capture, storage, integration, governance, and analytics initiatives, collaborating with cross-functional teams. Manage day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to align data platform capabilities with business needs. Participate in the Agile work intake and management process to support execution excellence for data platform teams. Collaborate with cross-functional teams to troubleshoot and resolve issues related to cloud infrastructure and data services. Assist in developing and automating operational policies and procedures to improve efficiency and service resilience. 
Support incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric environment, advocating for operational excellence and continuous service improvements. Contribute to building a collaborative, high-performing team culture focused on automation and efficiency in DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity while meeting business goals. Leverage technical expertise in cloud and data operations to improve service reliability and scalability. Qualifications: 12+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 12+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 8+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. 5+ years of experience in a management or lead role, with a focus on DataOps execution and delivery. Hands-on experience with Azure Data Factory (ADF) for orchestrating data pipelines and ETL workflows. Proficiency in Azure Synapse Analytics, Azure Data Lake Storage (ADLS), and Azure SQL Database. Familiarity with Azure Databricks for large-scale data processing (basic troubleshooting or support scope is sufficient if not engineering-focused). Exposure to cloud environments (AWS, Azure, GCP) and understanding of CI/CD pipelines for data operations. Knowledge of structured and semi-structured data storage formats (e.g., Parquet, JSON, Delta). Excellent communication skills, with the ability to empathize with stakeholders and articulate technical concepts to non-technical audiences. Strong problem-solving abilities, prioritizing customer needs and advocating for operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational excellence. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience in supporting mission-critical solutions in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) practices, such as automated issue remediation and scalability improvements. Experience driving operational excellence in complex, high-availability data environments. Ability to collaborate across teams, fostering strong relationships with business and IT stakeholders. Experience in data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong analytical and strategic thinking skills, with the ability to execute plans effectively and drive results. Proven ability to work in a fast-changing, complex environment, adapting to shifting priorities while maintaining productivity.
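As an illustration of the ADF pipeline monitoring and observability work this posting describes, here is a hedged sketch using the azure-mgmt-datafactory SDK to list recent pipeline runs and flag failures; the subscription ID, resource group, and factory name are invented placeholders:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

# Placeholder identifiers; substitute real values or load them from configuration.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-dataops"
FACTORY_NAME = "adf-enterprise"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Inspect pipeline runs from the last 24 hours.
now = datetime.now(timezone.utc)
filters = RunFilterParameters(last_updated_after=now - timedelta(days=1),
                              last_updated_before=now)

runs = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, filters)
for run in runs.value:
    if run.status == "Failed":
        # A real monitoring job would raise an alert (email, Teams, PagerDuty) here.
        print(f"FAILED: {run.pipeline_name} run {run.run_id}: {run.message}")
```

This is only a sketch of the monitoring idea; a self-healing setup would go further and trigger re-runs or ticket creation based on the failure category.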
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Hyderābād
On-site
Category: Software Development/ Engineering Main location: India, Andhra Pradesh, Hyderabad Position ID: J0725-0450 Employment Type: Full Time Position Description: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position, however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. Job Title: Azure Databricks Developer Position: Senior Software Engineer Experience: 5-10 Years Category: Software Development/ Engineering Main location: India, Bangalore / Hyderabad / Chennai Position ID: J0725-0450 Employment Type: Full Time Your future duties and responsibilities: Azure Databricks developer with 5-10 years of experience. We are seeking a skilled Azure Databricks Developer to design, develop, and optimize big data pipelines using Databricks on Azure. The ideal candidate will have strong expertise in PySpark, Azure Data Lake, and data engineering best practices in a cloud environment. Key Responsibilities: Design and implement ETL/ELT pipelines using Azure Databricks and PySpark. Work with structured and unstructured data from diverse sources (e.g., ADLS Gen2, SQL DBs, APIs). Optimize Spark jobs for performance and cost-efficiency. Collaborate with data analysts, architects, and business stakeholders to understand data needs. Develop reusable code components and automate workflows using Azure Data Factory (ADF). Implement data quality checks, logging, and monitoring. Participate in code reviews and adhere to software engineering best practices. Required Skills & Qualifications: 3-5 years of experience in Apache Spark / PySpark. 3-5 years working with Azure Databricks and Azure Data Services (ADLS Gen2, ADF, Synapse). Strong understanding of data warehousing, ETL, and data lake architectures. Proficiency in Python and SQL. Experience with Git, CI/CD tools, and version control practices. Required qualifications to be successful in this role: Experience: 5 to 10 Yrs Location: Bangalore / Hyderabad / Chennai Education: BE / B.Tech / MCA / BCA Skills: Azure Databricks, Azure Data Factory, SQL, PySpark, Python, ETL What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging.
Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
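For context on the Databricks/PySpark pipeline work this role describes, a minimal illustrative sketch follows; the ADLS paths and column names are assumptions, not details from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# Hypothetical ADLS Gen2 locations; real jobs would use mounted storage or abfss:// URIs with credentials.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2025/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

# Ingest raw CSV files.
raw_df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv(raw_path))

# Basic cleansing and a daily aggregate as a stand-in for real transformation logic.
daily = (raw_df
         .dropDuplicates(["order_id"])                      # hypothetical key column
         .withColumn("order_date", F.to_date("order_ts"))   # hypothetical timestamp column
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("total_amount")))

# Write as Delta, partitioned by date; incremental/partition-overwrite strategies are a common refinement.
daily.write.format("delta").mode("overwrite").partitionBy("order_date").save(curated_path)
```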
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Principal Duties and Responsibilities User Management Perform user management for Workday ERP database including Role and Permission management. Workday Cloud Support Manage and implement basic system configuration. Provide functional guidance to developers and the QA team in execution of business processes related to system functions and behaviors. Provide data guidance to developers and the QA team regarding questions on tables and data elements. Manage the database changes associated with upgrades to the Workday ERP software. Define and manage a Disaster Recovery process. Continuously monitor the databases for performance issues. Work with GHX support organizations to remediate database issues. Processes Define and implement a data refresh process and strategy. Define and implement a data de-identification process. Define and implement multiple test environments. Operational Duties Adhere to Change Management guidelines for all system changes and enhancements. Manage database user access. Knowledge And Skills Required Qualifications Bachelor’s degree in Computer Science/Information Technology/Systems or related field or demonstrated equivalent experience. 5+ years of hands-on Workday Cloud Administration and system support experience with mid to large market sized companies. Experience with the following Workday Cloud applications: GL, AP, AR, FA, Cash, Procurement, SSP, BI, SmartView and ADF. 2+ years of hands-on experience in Workday Cloud Database Administration. 2+ years of hands-on experience in a support organization or capacity. 2+ years of experience with data refresh and de-identification strategies and implementation. Understanding of Quality Assurance testing practices. Hands-on knowledge of Workday Cloud SCM Workflow. Required Skills Possess strong business acumen to communicate with and support Sales, Sales Operations, Customer Support, Finance, Accounting, Revenue, Purchasing, and HR as needed in a functional capacity. Possess reasonable technical acumen to allow learning/working in a basic technical/functional capacity in all Corporate Systems platforms. Advanced PC skills including MS Excel, PowerPoint, Outlook, Basic SQL. Strong analytical and problem-solving abilities. Strong interpersonal and communication skills. Familiarity with current project management/execution methodologies. Must be task-oriented with strong organizational and time management skills. Flexible and able to quickly adapt to a dynamic business environment. Ability to effectively communicate (written and verbal) complex solutions and ideas at a level suitable for any level of personnel from basic business users to highly technical developers. Ability to provide excellent customer service and collaborate between teams. Ability to handle workload under time pressure and meet strict deadlines. Ability to keep highly sensitive information confidential and be familiar with HIPAA and GDPR regulations. Must be able to manage time using a work queue comprised of ‘issue tickets’ across multiple platforms and perform to published service levels (SLA) and key results (KR). KEY DIFFERENTIATORS Certifications GHX: It's the way you do business in healthcare Global Healthcare Exchange (GHX) enables better patient care and billions in savings for the healthcare community by maximizing automation, efficiency and accuracy of business processes.
GHX is a healthcare business and data automation company, empowering healthcare organizations to enable better patient care and maximize industry savings using our world class cloud-based supply chain technology exchange platform, solutions, analytics and services. We bring together healthcare providers and manufacturers and distributors in North America and Europe - who rely on smart, secure healthcare-focused technology and comprehensive data to automate their business processes and make more informed decisions. It is our passion and vision for a more operationally efficient healthcare supply chain, helping organizations reduce - not shift - the cost of doing business, paving the way to delivering patient care more effectively. Together we take more than a billion dollars out of the cost of delivering healthcare every year. GHX is privately owned, operates in the United States, Canada and Europe, and employs more than 1000 people worldwide. Our corporate headquarters is in Colorado, with additional offices in Europe. Disclaimer Global Healthcare Exchange, LLC and its North American subsidiaries (collectively, “GHX”) provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, national origin, sex, sexual orientation, gender identity, religion, age, genetic information, disability, veteran status or any other status protected by applicable law. All qualified applicants will receive consideration for employment without regard to any status protected by applicable law. This EEO policy applies to all terms, conditions, and privileges of employment, including hiring, training and development, promotion, transfer, compensation, benefits, educational assistance, termination, layoffs, social and recreational programs, and retirement. GHX believes that employees should be provided with a working environment which enables each employee to be productive and to work to the best of his or her ability. We do not condone or tolerate an atmosphere of intimidation or harassment based on race, color, national origin, sex, sexual orientation, gender identity, religion, age, genetic information, disability, veteran status or any other status protected by applicable law. GHX expects and requires the cooperation of all employees in maintaining a discrimination and harassment-free atmosphere. Improper interference with the ability of GHX’s employees to perform their expected job duties is absolutely not tolerated.
Posted 2 weeks ago
0 years
6 - 9 Lacs
Calcutta
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory , our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI , our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation , our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn , X , YouTube , and Facebook . Inviting applications for the role of Senior Principal Consultant - Sr. Integration architect & Lead (Snowflake, Seeburger, ADF) Responsibilities Architect modern data solutions using Snowflake, Seeburger, and Azure Data Factory (ADF) Create and review solution design artifacts for Snowflake and ADF platforms Promote modular architecture by designing reusable ADF pipelines and Snowflake models Ensure compliance with data governance and Seeburger-based B2B integration standards Guide teams on best practices for ADF orchestration and Snowflake performance optimization Validate Seeburger message flows and ADF workflows during data infrastructure build phases Define operational roles for managing Snowflake environments and Seeburger integrations Identify and address architectural risks across ADF pipelines and Snowflake compute layers Evaluate design options across Snowflake and ADF for performance, cost, and scalability Lead technical strategy to build reusable components using ADF, Snowflake, and Seeburger Qualifications we seek in you! 
Minimum Qualifications Bachelor's degree in information science , data management , computer science or related field preferred Should have experience in Data Engineering domain Should have experience on Snowflake, ADF, Seeburger and components Strong technical architecture skills with proven experience designing end-to-end solutions using Snowflake, Seeburger, and Azure Data Factory Proven experience architecting end-to-end solutions using Snowflake, Seeburger, and Azure Data Factory Strong communication skills to lead global teams delivering Snowflake platforms and Seeburger integrations Experience in designing data solutions for supply chain management and EDI transactions In-depth knowledge of IT delivery models and cloud lifecycles for ADF and Snowflake deployments Hands-on expertise with Seeburger message flows and operational ADF pipeline management Leadership in defining architectural direction and promoting reuse across Snowflake and ADF components Should have designed the E2E architecture of unified data platform covering Data Ingestion, Transformation, Serve, and Consumption using tools like Snowflake, Seeburger, and Azure Data Factory Should have designed and implemented at least 2-3 projects end-to-end in Snowflake and ADF Should have hands-on experience in Snowflake workflows orchestration, security management, platform governance, and data security Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Senior Principal Consultant Primary Location India-Kolkata Schedule Full-time Education Level Master's / Equivalent Job Posting Jul 8, 2025, 11:37:30 PM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Design, deploy and manage Azure infrastructure including virtual machines, storage accounts, virtual networks, and other resources. Assist teams by deploying applications to AKS clusters using containerization technologies such as Docker and Kubernetes, Container Registry, etc. Familiarity with the Azure CLI and ability to use PowerShell to scan Azure resources, make modifications, and generate reports or exports. Setting up a 2- or 3-tier application on Azure: VMs, web apps, load balancers, proxies, etc. Well versed in security: AD, Managed Identity (MI), SPN, firewalls. Networking: NSGs, VNETs, private endpoints, ExpressRoute, Bastion, etc. Familiarity with a scripting language like Python for automation. Leveraging Terraform (or Bicep) for automating infrastructure deployment. Cost tracking, analysis, reporting, and management at the resource group level. Experience with Azure DevOps. Experience with Azure Monitor. Strong hands-on experience in ADF, Linked Service/IR (self-hosted/managed), Logic Apps, Service Bus, Databricks, SQL Server. Strong understanding of Python, Spark, and SQL (nice to have). Ability to work in fast-paced environments as we have tight SLAs for tickets. Self-driven with an exploratory mindset, as the work requires a good amount of research (within and outside the application).
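The resource-scanning and reporting duty above could be scripted in PowerShell or, as sketched here for illustration, in Python with the Azure SDK; the subscription ID and resource group name are placeholders:

```python
import csv

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-app-prod"                            # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Dump every resource in the group into a simple CSV report.
with open("resource_report.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["name", "type", "location"])
    for res in client.resources.list_by_resource_group(RESOURCE_GROUP):
        writer.writerow([res.name, res.type, res.location])
```

The same inventory could feed cost reporting per resource group, one of the other responsibilities listed.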
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are seeking a skilled Data Engineer with strong expertise in Azure Data Factory (ADF), Snowflake, and Kafka (Confluent Platform). The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and streaming solutions. Key Responsibilities Design, develop, and maintain ADF pipelines for data ingestion, transformation, and orchestration. Monitor and troubleshoot pipeline failures and ensure smooth data flow. Write efficient and optimized SQL queries and create complex views in Snowflake. Integrate and manage Kafka producers/consumers and implement real-time data stream processing using the Confluent Platform. Collaborate with data analysts, architects, and software developers to deliver end-to-end data solutions. (Optional) Support backend development using Java 17 and Spring Boot, if required. Candidate Requirements 3 to 5 years of hands-on experience in: Azure Data Factory (ADF): Pipeline creation, monitoring, troubleshooting. Snowflake: Complex SQL queries, view creation, query optimization. Apache Kafka and Confluent Platform: Stream processing, producer/consumer integration. Good understanding of data modeling, ETL concepts, and cloud-based architectures. Nice to have: Experience in Java 17 and Spring Boot.
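As a small illustration of the Snowflake view-building work this role describes, here is a hedged sketch using the snowflake-connector-python package; the account, credentials, and database object names are placeholders:

```python
import snowflake.connector

# Placeholder credentials; production code would use key-pair auth or a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",
    user="ETL_SVC",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="REPORTING",
)

# A simple reporting view over a hypothetical raw orders table.
create_view_sql = """
CREATE OR REPLACE VIEW REPORTING.V_DAILY_REVENUE AS
SELECT ORDER_DATE,
       REGION,
       SUM(AMOUNT) AS TOTAL_AMOUNT
FROM   RAW.ORDERS
GROUP BY ORDER_DATE, REGION
"""

try:
    cur = conn.cursor()
    cur.execute(create_view_sql)
finally:
    conn.close()
```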
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Skill required: Tech for Operations - Microsoft Azure Cloud Services Designation: App Automation Eng Senior Analyst Qualifications: Any Graduation/12th/PUC/HSC Years of Experience: 5 to 8 years About Accenture Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song— all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com. What would you do? In our Service Supply Chain offering, we leverage a combination of proprietary technology and client systems to develop, execute, and deliver BPaaS (business process as a service) or Managed Service solutions across the service lifecycle: Plan, Deliver, and Recover. In this role, you will partner with business development and act as a Business Subject Matter Expert (SME) to help build resilient solutions that will enhance our clients’ supply chains and customer experience. The Senior Azure Data Factory (ADF) Support Engineer II will be a critical member of our Enterprise Applications Team, responsible for designing, supporting & maintaining robust data solutions. The ideal candidate is proficient in ADF, SQL and has extensive experience in troubleshooting Azure Data Factory environments, conducting code reviews, and bug fixing. This role requires a strategic thinker who can collaborate with cross-functional teams to drive our data strategy and ensure the optimal performance of our data systems. What are we looking for? Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Proven experience (5+ years) as an Azure Data Factory Support Engineer II. Expertise in ADF with a deep understanding of its data-related libraries. Strong experience in Azure cloud services, including troubleshooting and optimizing cloud-based environments. Proficient in SQL and experience with SQL database design. Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy. Experience with ADF pipelines. Excellent problem-solving and troubleshooting skills. Experience in code review and debugging in a collaborative project setting. Excellent verbal and written communication skills. Ability to work in a fast-paced, team-oriented environment.
Strong understanding of the business and a passion for the mission of Service Supply Chain. Hands-on experience with Jira, DevOps ticketing, and ServiceNow is good to have. Roles and Responsibilities: Innovate. Collaborate. Build. Create. Solve. ADF & associated systems: Ensure systems meet business requirements and industry practices. Integrate new data management technologies and software engineering tools into existing structures. Recommend ways to improve data reliability, efficiency, and quality. Use large data sets to address business issues. Use data to discover tasks that can be automated. Fix bugs to ensure a robust and sustainable codebase. Collaborate closely with the relevant teams to diagnose and resolve issues in data processing systems, ensuring minimal downtime and optimal performance. Analyze and comprehend existing ADF data pipelines, systems, and processes to identify and troubleshoot issues effectively. Develop, test, and implement code changes to fix bugs and improve the efficiency and reliability of data pipelines. Review and validate change requests from stakeholders, ensuring they align with system capabilities and business objectives. Implement robust monitoring solutions to proactively detect and address issues in ADF data pipelines and related infrastructure. Coordinate with data architects and other team members to ensure that changes are in line with the overall architecture and data strategy. Document all changes, bug fixes, and updates meticulously, maintaining clear and comprehensive records for future reference and compliance. Provide technical guidance and support to other team members, promoting a culture of continuous learning and improvement. Stay updated with the latest technologies and practices in ADF to continuously improve the support and maintenance of data systems. Flexible work hours to include US time zones; this position may require you to work a rotational on-call schedule, evenings, weekends, and holiday shifts when the need arises. Participate in the Demand Management and Change Management processes. Work in partnership with internal business, external 3rd party technical teams and functional teams as a technology partner in communicating and coordinating delivery of technology services from Technology For Operations (TfO).
Posted 2 weeks ago
2.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Management Level: Location: Kochi, Coimbatore, Trivandrum Must have skills: Python/Scala, PySpark/PyTorch Good to have skills: Redshift Job Summary You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include: Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals Solving complex data problems to deliver insights that help our business achieve its goals. Source data (structured and unstructured) from various touchpoints, and format and organize it into an analyzable format. Creating data products for analytics team members to improve productivity Calling AI services (e.g., vision, translation) to generate an outcome that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions Preparing data to create a unified database and build tracking solutions ensuring data quality Creating production-grade analytical assets deployed using the guiding principles of CI/CD. Professional And Technical Skills Expert in Python, Scala, PySpark, PyTorch, JavaScript (any 2 at least) Extensive experience in data analysis (big data / Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL. 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, Snowflake Cloud Data Warehouse. Additional Information Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS - AWS Certified Data Analytics - Specialty; Azure - Microsoft Certified Azure Data Scientist Associate; Snowflake - SnowPro Core - Data Engineer; Databricks - Data Engineering. About Our Company | Accenture Experience: 3.5-5 years of experience is required Educational Qualification: Graduation (accurate educational details should be captured)
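As a sketch of the "source data from various touchpoints and organize it into an analyzable format" responsibility, the following PySpark snippet unifies a structured CSV feed and a semi-structured JSON feed; the bucket paths, schemas, and column names are assumptions made for illustration only:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("touchpoint-unification").getOrCreate()

# Hypothetical source locations for two touchpoints (a structured CRM export and a JSON web-event feed).
crm_df = spark.read.option("header", "true").csv("s3://example-bucket/crm/customers.csv")
web_df = spark.read.json("s3://example-bucket/web/events/*.json")

# Normalise both sources to a common schema so they can be analysed together.
crm_norm = crm_df.select(F.col("customer_id"), F.lit("crm").alias("source"), F.col("email"))
web_norm = web_df.select(F.col("user_id").alias("customer_id"),
                         F.lit("web").alias("source"),
                         F.col("properties.email").alias("email"))  # assumes a nested 'properties' struct

unified = crm_norm.unionByName(web_norm)
unified.write.mode("overwrite").parquet("s3://example-bucket/curated/customers_unified/")
```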
Posted 2 weeks ago
0.0 - 2.0 years
5 - 12 Lacs
Pune, Maharashtra
On-site
Company name: PibyThree consulting Services Pvt Ltd. Location : Baner, Pune Start date : ASAP Job Description : We are seeking an experienced Data Engineer to join our team. The ideal candidate will have hands-on experience with Azure Data Factory (ADF), Snowflake, and data warehousing concepts. The Data Engineer will be responsible for designing, developing, and maintaining large-scale data pipelines and architectures. Key Responsibilities: Design, develop, and deploy data pipelines using Azure Data Factory (ADF) Work with Snowflake to design and implement data warehousing solutions Collaborate with cross-functional teams to identify and prioritize data requirements Develop and maintain data architectures, data models, and data governance policies Ensure data quality, security, and compliance with regulatory requirements Optimize data pipelines for performance, scalability, and reliability Troubleshoot data pipeline issues and implement fixes Stay up-to-date with industry trends and emerging technologies in data engineering Requirements: 4+ years of experience in data engineering, with a focus on cloud-based data platforms (Azure preferred) 2+ years of hands-on experience with Azure Data Factory (ADF) 1+ year of experience working with Snowflake Strong understanding of data warehousing concepts, data modeling, and data governance Experience with data pipeline orchestration tools such as Apache Airflow or Azure Databricks Proficiency in programming languages such as Python, Java, or C# Experience with cloud-based data storage solutions such as Azure Blob Storage or Amazon S3 Strong problem-solving skills and attention to detail Job Type: Full-time Pay: ₹500,000.00 - ₹1,200,000.00 per year Schedule: Day shift Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: total work: 4 years (Preferred) Pyspark: 2 years (Required) Azure Data Factory: 2 years (Required) Databricks: 2 years (Required) Work Location: In person
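The posting lists Apache Airflow as an orchestration option alongside ADF and Databricks; a minimal illustrative DAG is shown below, where the DAG name, schedule, and task bodies are placeholders rather than anything specified by the employer:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_staging():
    # Placeholder: pull data from a source system into a staging area (e.g., Blob Storage).
    pass


def load_to_snowflake():
    # Placeholder: copy staged files into Snowflake, e.g., via a COPY INTO statement.
    pass


with DAG(
    dag_id="daily_sales_pipeline",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_staging", python_callable=extract_to_staging)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load  # run the load only after extraction succeeds
```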
Posted 2 weeks ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Work from Office
Role & Responsibilities: Within the technical team and under the guidance of the Team Manager, you will: Be in charge of installing, configuring, and upgrading/patching the Product applications internally Handle and follow up technical issues (wide diversity and complexity) and perform corrective actions Interact actively with the functional and technical teams (including development and architecture) located around the globe Provide advice for choices and implementation of interfaces / surrounds (inbound, and outbound), including advice and support on how to develop client reporting Propose solutions to address client challenges Provide on call support (24x7) on rotation basis and weekend/holiday support Support in shifts on rotation basis Contribute towards the Technical Knowledgebase (preparation of documents / presentations on related topics) Provide training, guidance and support to client IT teams Job Description: Good skills in Oracle Fusion Middleware 11g /12c (Forms & Report, ADF, BI Publisher, Oracle Identify Management (OID/OAM)) Good skills in handling middleware vulnerabilities and security (CVE) related queries Excellent analytical and logical skills Ability to address problems with methodology Ability to anticipate client needs and be proactive Strong motivation to continuously increase quality and efficiency Good presentation and communication skills Autonomous, rigorous, and well organized Capability to work within a global team (spread across geographies) and interact with different teams Willingness to work in rotational shifts Quick learner and keen to learn SQL, PL/SQL and Oracle database and new technologies Good to have Knowledge of SQL, PL/SQL and Oracle database
Posted 2 weeks ago
4.0 - 8.0 years
5 - 15 Lacs
Chennai, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job Description (JD): Azure Databricks / ADF / Synapse, with strong emphasis on Python, SQL, Data Lake, and Data Warehouse: Job Title: Data Engineer - Azure (Databricks / ADF / Synapse) Experience: 4 to 7 Years Location: Pan India Employment Type: Full-Time Notice Period: Immediate to 30 Days Job Summary: We are looking for a skilled and experienced Data Engineer with 4 to 8 years of experience in building scalable data solutions on the Microsoft Azure ecosystem. The ideal candidate must have strong hands-on experience with Azure Databricks, Azure Data Factory (ADF), or Azure Synapse Analytics, along with Python and SQL expertise. Familiarity with Data Lake, Data Warehouse concepts, and end-to-end data pipelines is essential. Key Responsibilities: Requirement gathering and analysis Experience with different databases like Synapse, SQL DB, Snowflake etc. Design and implement data pipelines using Azure Data Factory, Databricks, Synapse Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage Implement data security and governance measures Monitor and optimize data pipelines for performance and efficiency Troubleshoot and resolve data engineering issues Provide optimized solutions for any problem related to data engineering Ability to work with a variety of sources like Relational DB, API, File System, Realtime streams, CDC etc. Strong knowledge of Databricks and Delta tables Required Skills: 4-8 years of experience in Data Engineering or related roles. Hands-on experience in Azure Databricks, ADF, or Synapse Analytics Proficiency in Python for data processing and scripting. Strong command of SQL: writing complex queries, performance tuning, etc. Experience working with Azure Data Lake Storage and Data Warehouse concepts (e.g., dimensional modeling, star/snowflake schemas). Understanding of CI/CD practices in a data engineering context. Excellent problem-solving and communication skills. Good to Have: Experience in Delta Lake, Power BI, or Azure DevOps. Knowledge of Spark, Scala, or other distributed processing frameworks. Exposure to BI tools like Power BI, Tableau, or Looker. Familiarity with data security and compliance in the cloud. Experience in leading a development team.
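Since Delta tables are called out explicitly, here is a brief sketch of a Delta Lake MERGE (upsert) in PySpark, assuming a Databricks environment where delta-spark is available; the table paths and key column are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

target_path = "/mnt/curated/customers"                                   # hypothetical Delta table location
updates_df = spark.read.parquet("/mnt/staging/customers_incremental")    # hypothetical incremental load

target = DeltaTable.forPath(spark, target_path)

# Standard upsert: update rows with matching keys, insert the new ones.
(target.alias("t")
 .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```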
Posted 2 weeks ago
6.0 - 8.0 years
12 - 18 Lacs
Hyderabad, Bengaluru
Work from Office
Job Title: Oracle Fusion Functional Consultant - Cash Management & Lease Accounting Location: Hyderabad / Bangalore Experience: 6-8 Years Department: Oracle ERP - Finance Job Summary We are seeking an experienced Oracle Fusion Functional Consultant specializing in Cash Management and Lease Accounting, with strong functional knowledge and hands-on expertise in Fusion Financials. The ideal candidate should have 2-3 end-to-end implementation and/or support cycles, with a preference for candidates who have previously held Financial Functional Lead roles. Familiarity with Oracle Cloud tools, workflow processes, and prior EBS experience is highly desirable. Key Responsibilities Lead or support implementations of Oracle Fusion Cash Management and Lease Accounting modules. Collaborate with business stakeholders to gather requirements and translate them into functional specifications. Write functional design documents (MD50), test scripts, and support OTBI reports & Fusion analytics. Work on Oracle workflow processes and assist technical teams with integrations and reporting needs. Leverage FSM (Functional Setup Manager) and ADF (Application Development Framework) for configurations and issue resolution. Use Data Integration (DI) tools for mass data uploads and validations. Engage in testing, data migration, UAT, and post-go-live support. Ensure compliance with Oracle Cloud best practices and security standards. Required Skills & Experience 2-3 implementations or support projects in Oracle Fusion Cash Management & Lease Accounting. Strong hands-on knowledge in Oracle Fusion Financials. Experience with writing functional specs, working on OTBI (Oracle Transactional Business Intelligence) and Fusion Analytics. Solid understanding of workflow processes and how to configure them in Oracle Cloud. Familiarity with Oracle FSM, ADF tools, and Data Integration (DI) tools. Prior experience in Oracle EBS (Financials). Proven ability to work with cross-functional teams and technical counterparts. Strong communication, documentation, and stakeholder management skills. Preferred Qualifications Experience in a Financial Functional Lead role in past projects. Oracle Financials Cloud Certification preferred (e.g., General Ledger, Payables, Receivables). Exposure to multi-currency, intercompany, and bank reconciliation processes. Familiarity with Agile/Hybrid project methodologies.
Posted 2 weeks ago
6.0 - 11.0 years
25 - 35 Lacs
Bengaluru
Hybrid
We are hiring Azure Data Engineers for an active project-Bangalore location Interested candidates can share details on the mail with their updated resume. Total Exp? Rel exp in Azure Data Engineering? Current organization? Current location? Current fixed salary? Expected Salary? Do you have any offers? if yes mention the offer you have and reason for looking for more opportunity? Open to relocate Bangalore? Notice period? if serving/not working, mention your LWD? Do you have PF account ?
Posted 2 weeks ago
6.0 - 11.0 years
12 - 22 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding permanent opportunity !! Job Description: Exp: 6-12 yrs Location: PAN India Skill: Azure Data Factory/SSIS Interested can share your resume to sangeetha.spstaffing@gmail.com with below inline details. Full Name as per PAN: Mobile No: Alt No/ Whatsapp No: Total Exp: Relevant Exp in Data Factory: Rel Exp in Synapse: Rel Exp in SSIS: Rel Exp in Python/Pyspark: Current CTC: Expected CTC: Notice Period (Official): Notice Period (Negotiable)/Reason: Date of Birth: PAN number: Reason for Job Change: Offer in Pipeline (Current Status): Availability for virtual interview on weekdays between 10 AM- 4 PM(plz mention time): Current Res Location: Preferred Job Location: Whether educational % in 10th std, 12th std, UG is all above 50%? Do you have any gaps in between your education or Career? If having gap, please mention the duration in months/year:
Posted 2 weeks ago
6.0 - 8.0 years
12 - 18 Lacs
Hyderabad, Bengaluru
Work from Office
Job Title: Oracle Fusion Functional Consultant Cash Management & Lease Accounting Location: Hyderabad / Bangalore Experience: 6-8 Years Department: Oracle ERP – Finance Job Summary We are seeking an experienced Oracle Fusion Functional Consultant specializing in Cash Management and Lease Accounting, with strong functional knowledge and hands-on expertise in Fusion Financials. The ideal candidate should have 2–3 end-to-end implementation and/or support cycles, with a preference for candidates who have previously held Financial Functional Lead roles. Familiarity with Oracle Cloud tools, workflow processes, and prior EBS experience is highly desirable. Key Responsibilities Lead or support implementations of Oracle Fusion Cash Management and Lease Accounting modules. Collaborate with business stakeholders to gather requirements and translate them into functional specifications. Write functional design documents (MD50), test scripts, and support OTBI reports & Fusion analytics. Work on Oracle workflow processes and assist technical teams with integrations and reporting needs. Leverage FSM (Functional Setup Manager) and ADF (Application Development Framework) for configurations and issue resolution. Use Data Integration (DI) tools for mass data uploads and validations. Engage in testing, data migration, UAT, and post-go-live support. Ensure compliance with Oracle Cloud best practices and security standards. Required Skills & Experience 2–3 implementations or support projects in Oracle Fusion Cash Management & Lease Accounting. Strong hands-on knowledge in Oracle Fusion Financials. Experience with writing functional specs, working on OTBI (Oracle Transactional Business Intelligence) and Fusion Analytics. Solid understanding of workflow processes and how to configure them in Oracle Cloud. Familiarity with Oracle FSM, ADF tools, and Data Integration (DI) tools. Prior experience in Oracle EBS (Financials). Proven ability to work with cross-functional teams and technical counterparts. Strong communication, documentation, and stakeholder management skills. Preferred Qualifications Experience in a Financial Functional Lead role in past projects. Oracle Financials Cloud Certification preferred (e.g., General Ledger, Payables, Receivables). Exposure to multi-currency, intercompany, and bank reconciliation processes. Familiarity with Agile/Hybrid project methodologies.
Posted 2 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Pune
Hybrid
Azure Data Engineer Remote/Pune-Hybrid Full time Permanent Company: Academian Job Description: We are seeking a skilled Data Engineer with strong experience in Microsoft Azure Cloud services to design, build, and maintain robust data pipelines and architectures. In this role, you will design, implement, and maintain our data infrastructure, ensuring efficient data processing and availability throughout the organization. Key Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines in Azure. Work with Azure services such as Azure Data Factory, Azure Data Lake, Synapse Analytics, Azure SQL, and Databricks . Implement and optimize data storage and retrieval solutions in the cloud. Ensure data quality, consistency, and governance through robust validation and monitoring. Develop and manage CI/CD pipelines for data workflows using tools like Azure DevOps . Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions. Support and troubleshoot data issues and ensure high availability of data infrastructure. Follow best practices in data security, privacy, and compliance Develop and maintain data architectures (data lakes, data warehouses). Integrate data from a wide variety of sources (APIs, logs, third-party platforms). Monitor data workflows and troubleshoot data-related issues. Required Skills & Experience Bachelors degree in computer science, Information Technology, or related field 5+ years of experience in data engineering or similar role Strong hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, and Databricks. Proficiency in SQL, Python, and PySpark. Experience with data modeling, schema design, and data warehousing. Familiarity with CI/CD processes, version control (e.g., Git), and deployment in Azure DevOps. Knowledge of data governance tools and practices (e.g., Azure Purview, RBAC). Strong SQL skills and experience with relational databases Proficiency with Apache Kafka and streaming data architectures Knowledge of ETL tools and processes Familiarity with DW-BI Tools PowerBI Strong knowledge of database systems (PostgreSQL, MySQL, NoSQL). Understanding of distributed systems like Kafka or MSK Preferred Skills: Experience of data visualization tools Experience with NoSQL databases Understanding of machine learning pipelines and workflows Regards Manisha Koul mkoul@academian.com www.linkedin.com/in/koul-manisha
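Given the combination of Databricks, Kafka, and streaming architectures listed above, a compact Spark Structured Streaming sketch is shown below; the broker, topic, and storage paths are placeholders rather than details from the posting, and the spark-sql-kafka connector is assumed to be on the cluster:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read a Kafka topic as a streaming source (requires the spark-sql-kafka package on the cluster).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
          .option("subscribe", "clickstream")                 # placeholder topic
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
parsed = events.select(F.col("value").cast("string").alias("payload"),
                       F.col("timestamp"))

# Land the stream in a bronze Delta table; the checkpoint path enables fault-tolerant restarts.
query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/clickstream")  # placeholder path
         .outputMode("append")
         .start("/mnt/bronze/clickstream"))                             # placeholder path

query.awaitTermination()
```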
Posted 2 weeks ago
3.0 - 7.0 years
4 - 7 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role: We're hiring 2 Cloud & Data Engineering Specialists to join our fast-paced, agile team. These roles are focused on designing, developing, and scaling modern, cloud-based data engineering solutions using tools like Azure, AWS, GCP, Databricks, Kafka, PySpark, SQL, Snowflake, and ADF. Position 1: Cloud & Data Engineering Specialist Resource 1 Key Responsibilities: Develop and manage cloud-native solutions on Azure or AWS Build real-time streaming apps with Kafka Engineer services using Java and Python Deploy and manage Kubernetes-based containerized applications Process big data using Databricks Administer SQL Server and Snowflake databases, write advanced SQL Utilize Unix/Linux for system operations Must-Have Skills: Azure or AWS cloud experience Kafka, Java, Python, Kubernetes Databricks, SQL Server, Snowflake Unix/Linux commands Location- Remote, Delhi NCR,Bengaluru,Chennai,Pune,Kolkata,Ahmedabad, Mumbai, Hyderabad
Posted 2 weeks ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within…. Responsibilities: Job Description: · Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services. · Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. · Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. · Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. · Maintain best practice standards for the development or cloud-based data warehouse solutioning including naming standards. · Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks · Integrating the end-to-end data pipeline to take data from source systems to target data repositories ensuring the quality and consistency of data is always maintained · Working with other members of the project team to support delivery of additional project components (API interfaces) · Evaluating the performance and applicability of multiple tools against customer requirements · Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. 
· Integrate Databricks with other technologies (Ingestion tools, Visualization tools). · Proven experience working as a data engineer · Highly proficient in using the spark framework (python and/or Scala) · Extensive knowledge of Data Warehousing concepts, strategies, methodologies. · Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks). · Hands on experience designing and delivering solutions using Azure including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics · Experience in designing and hands-on development in cloud-based analytics solutions. · Expert level understanding on Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. · Designing and building of data pipelines using API ingestion and Streaming ingestion methods. · Knowledge of Dev-Ops processes (including CI/CD) and Infrastructure as code is essential. · Thorough understanding of Azure Cloud Infrastructure offerings. · Strong experience in common data warehouse modeling principles including Kimball. · Working knowledge of Python is desirable · Experience developing security models. · Databricks & Azure Big Data Architecture Certification would be plus Mandatory skill sets: ADE, ADB, ADF Preferred skill sets: ADE, ADB, ADF Years of experience required: 4-8 Years Education qualification: BE, B.Tech, MCA, M.Tech d Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills ADF Business Components, ADL Assistance, Android Debug Bridge (ADB) Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
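One of the listed duties is building pipelines using API ingestion; as a hedged illustration (the endpoint, pagination scheme, and field names are invented for the example), raw records could be pulled and landed for downstream ADF/Databricks processing like this:

```python
import json

import requests

# Hypothetical REST endpoint exposing paginated records.
BASE_URL = "https://api.example.com/v1/orders"


def fetch_all(page_size: int = 100):
    """Yield all records, following simple page-number pagination."""
    page = 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "page_size": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        yield from batch
        page += 1


# Land the raw records as newline-delimited JSON, ready for Spark or ADF to pick up.
with open("orders_raw.jsonl", "w") as fh:
    for record in fetch_all():
        fh.write(json.dumps(record) + "\n")
```

In a real pipeline the output would typically be written to ADLS rather than a local file, and ingestion would be scheduled and monitored from ADF or Databricks Workflows.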
Posted 2 weeks ago
3.0 years
3 - 5 Lacs
Gurgaon
On-site
#Freepost
Designation: Middleware Administrator L2
Experience: 3+ Years
Qualification: BE/BTech/Diploma in IT background

Roles & Responsibilities: Application Monitoring Services
✓ Monitor application response times from the end-user perspective in real time and alert the organization when performance is unacceptable. By alerting the user to problems and intelligently segmenting response times, it should quickly expose problem sources and minimize the time necessary for resolution.
✓ It should allow specific application transactions to be captured and monitored separately. This allows administrators to select the most important operations within business-critical applications to be measured and tracked individually.
✓ It should use baseline-oriented thresholds to raise alerts when application response times deviate from acceptable levels. This allows IT administrators to respond quickly to problems and minimize the impact on service delivery.
✓ It should automatically segment response-time information into network, server, and local workstation components to easily identify the source of bottlenecks.
✓ Monitoring of applications, including Oracle Forms 10g, Oracle SSO 10g, OID 10g, Oracle Portal 10g, Oracle Reports 10g, Internet Application Server (OAS) 10.1.2.2.0, Oracle Web Server (OWS) 10.1.2.2.0, Oracle WebCenter Portal 12.2.1.3, Oracle Access Manager 12.2.1.3, Oracle Internet Directory 12.2.1.3, Oracle WebLogic Server 12.2.1.3, Oracle HTTP Server 12.2.1.3, Oracle ADF 12.2.1.3 (Fusion Middleware), Oracle Forms 12.2.1.3, Oracle Reports 12.2.1.3, mobile apps, Windows IIS, portal, web cache, BizTalk applications, DNS applications, Tomcat, etc.

Job Type: Full-time
Pay: ₹350,000.00 - ₹500,000.00 per year
Benefits:
Health insurance
Provident Fund
Work Location: In person
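As a hedged illustration of the end-user response-time monitoring described above, the following minimal Python sketch polls an application URL and flags slow or failed responses. The endpoint and threshold are assumptions; a real deployment would rely on the organisation's monitoring tooling rather than a standalone script.

```python
# Illustrative only: a minimal end-user response-time probe. The URL and threshold
# are hypothetical assumptions, not values taken from this posting.
import time
import requests

URL = "https://portal.example.com/health"   # hypothetical endpoint
THRESHOLD_SECONDS = 2.0                     # assumed acceptable response time

def check_response_time(url: str) -> None:
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=10)
        elapsed = time.monotonic() - start
        if response.status_code != 200:
            print(f"ALERT: {url} returned HTTP {response.status_code}")
        elif elapsed > THRESHOLD_SECONDS:
            print(f"ALERT: {url} responded in {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
        else:
            print(f"OK: {url} responded in {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"ALERT: {url} unreachable: {exc}")

if __name__ == "__main__":
    check_response_time(URL)
```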
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Gurgaon
On-site
# Freepost
Designation: Middleware Administrator L1
Location: Gurgaon
Experience: 2-4 years
Qualification: B.E. / B.Tech / BCA

Required Key Skills:

Application Monitoring Services
1. Real-Time Performance Monitoring
Monitor application response times from the end-user perspective.
Trigger alerts when performance is below acceptable thresholds.
Segment response times to quickly identify problem sources and reduce resolution time.
2. Transaction-Level Monitoring
Enable tracking of specific, business-critical application transactions.
Allow targeted monitoring of selected operations for better visibility and control.
3. Baseline-Oriented Threshold Alerts
Use dynamic baselines to raise alerts on deviations in application response times.
Help administrators detect and address issues proactively.
4. Response Time Segmentation
Automatically categorize response time into network, server, and local workstation components.
Assist in pinpointing performance bottlenecks.
5. Supported Applications and Platforms
Monitoring support includes: Oracle Forms 10g and 12.2.1.3; Oracle SSO 10g; Oracle Access Manager 12.2.1.3; Oracle Internet Directory (OID) 10g and 12.2.1.3; Oracle Portal 10g; WebCenter Portal 12.2.1.3; Oracle Reports 10g and 12.2.1.3; Oracle Web Server (OWS) 10.1.2.2.0; Oracle Internet Application Server (OAS) 10.1.2.2.0; Oracle WebLogic Server 12.2.1.3; Oracle HTTP Server 12.2.1.3; Oracle ADF (Fusion Middleware) 12.2.1.3; mobile applications; Windows IIS; Web Cache; BizTalk applications; DNS applications; Apache Tomcat; etc.
6. Operational Activities
Application shutdown and startup
MIS report generation
Load and performance monitoring
Script execution for user account management
Event and error log monitoring
Daily health checklist compliance
Portal status and content updates
7. Logging and Reporting
Log system events and incidents
Update SR and incident tickets in the Symphony iServe tool

Application Release Management
1. Release Coordination
Schedule, coordinate, and manage application releases across environments.
2. Deployment Management
Perform pre-deployment activities including code backup, new code placement, and restarting services post-deployment.

Job Types: Full-time, Permanent
Benefits:
Health insurance
Provident Fund
Schedule:
Morning shift
Rotational shift
Work Location: In person
Posted 2 weeks ago
3.0 years
1 - 9 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Design, develop, and implement data models and ETL processes for Power BI solutions
Understand and create test scripts for data validation as data moves through various lifecycles in cloud-based technologies
Work closely with business partners and data SMEs to understand Healthcare Quality Measures and the related business requirements
Conduct data validation after major/minor enhancements in the project and determine the best data validation techniques to implement
Communicate effectively with leadership and analysts across teams
Troubleshoot and resolve issues with jobs/pipelines/overhead
Ensure data accuracy and integrity between sources and consumers
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
Graduate degree or equivalent (B.Tech./MCA preferred) with overall 3+ years of work experience
3+ years of advanced understanding of at least one programming language - Python, Spark, Scala
Experience of working with cloud technologies, preferably Snowflake, ADF and Databricks
Experience of working with Agile methodology (preferably in Rally)
Knowledge of Unix shell scripting for automation and scheduling of batch jobs
Knowledge of configuration management - GitHub
Knowledge of relational databases - SQL Server, Oracle, Teradata, IBM DB2, MySQL
Knowledge of messaging queues - Kafka/ActiveMQ/RabbitMQ
Knowledge of CI/CD tools - Jenkins
Understanding of relational database models and entity-relationship diagrams
Proven solid communication and interpersonal skills
Proven excellent written and verbal communication skills with the ability to provide clear explanations and overviews to others (internal and external) of their work efforts
Proven solid facilitation, critical thinking, problem solving, decision making and analytical skills
Demonstrated ability to prioritize and manage multiple tasks

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
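As a hedged illustration of the source-to-target data validation mentioned in the responsibilities above, the following PySpark sketch compares row counts and key-level nulls between a source table and its downstream copy. The table and column names are hypothetical, not taken from the posting.

```python
# Illustrative only: a minimal source-to-target validation check of the kind described
# above. Table and column names are assumed for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data_validation").getOrCreate()

source = spark.table("staging.claims")   # hypothetical source table
target = spark.table("curated.claims")   # hypothetical target table

source_count = source.count()
target_count = target.count()
null_keys = target.filter(F.col("claim_id").isNull()).count()

assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}"
)
assert null_keys == 0, f"{null_keys} rows in target have a null claim_id"

print(f"Validation passed: {target_count} rows, no null keys")
```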
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Job Title: Data Engineer
Experience: 5+ Years
Location: Pan India
Mode: Hybrid
Skill combination: Python AND AWS AND Databricks AND PySpark AND Elastic Search

We are looking for a Data Engineer to join our team to build, maintain, and enhance scalable, high-performance data pipelines and cloud-native solutions. The ideal candidate will have deep experience in Databricks, Python, PySpark, Elastic Search, and SQL, and a strong understanding of cloud-based ETL services, data modeling, and data security best practices.

Key Responsibilities:
Design, implement, and maintain scalable data pipelines using Databricks, PySpark, and SQL.
Develop and optimize ETL processes leveraging services like AWS Glue, GCP DataProc/DataFlow, Azure ADF/ADLF, and Apache Spark.
Build, manage, and monitor Airflow DAGs to orchestrate data workflows (see the sketch after this posting).
Integrate and manage Elastic Search for data indexing, querying, and analytics.
Write advanced SQL queries using window functions and analytics techniques.
Design data schemas and models that align with various business domains and use cases.
Optimize data warehousing performance and storage using best practices.
Ensure data security, governance, and compliance across all environments.
Apply data engineering design patterns and frameworks to build robust solutions.
Collaborate with Product, Data, and Engineering teams; support executive data needs.
Participate in Agile ceremonies and follow DevOps/DataOps/DevSecOps practices.
Respond to critical business issues as part of an on-call rotation.

Must-Have Skills:
Databricks (3+ years): development and orchestration of data workflows.
Python & PySpark (3+ years): hands-on experience in distributed data processing.
Elastic Search (3+ years): indexing and querying large-scale datasets.
SQL (3+ years): proficiency in analytical SQL, including window functions.
ETL services: AWS Glue, GCP DataProc/DataFlow, Azure ADF/ADLF.
Airflow: designing and maintaining data workflows.
Data warehousing: expertise in performance tuning and optimization.
Data modeling: understanding of data schemas and business-oriented data models.
Data security: familiarity with encryption, access control, and compliance standards.
Cloud platforms: AWS (must), GCP and Azure (preferred).

Skills: Python, Databricks, PySpark, Elastic Search
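To illustrate the Airflow orchestration called out above, here is a minimal sketch of a daily DAG chaining extract, transform, and load tasks with PythonOperator. The DAG id, schedule, and task bodies are assumptions for the sketch, not part of the role description.

```python
# Illustrative only: a minimal Airflow 2.x DAG of the kind referenced above. The DAG
# id, schedule, and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def transform():
    print("apply business transformations")

def load():
    print("write results to the warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```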
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Location: HYDERABAD OFFICE INDIA

Job Description
Are you looking to take your career to the next level? We're looking for a Junior Software Engineer to join our Data & Analytics Core Data Lake Platform engineering team. We are searching for self-motivated candidates who will demonstrate modern Agile and DevOps practices to craft, develop, test and deploy IT systems and applications, delivering global projects in multinational teams.
The P&G Core Data Lake Platform is a central component of the P&G data and analytics ecosystem. The CDL Platform is used to deliver a broad scope of digital products and frameworks used by data engineers and business analysts. In this role you will have an opportunity to use data engineering skills to deliver solutions enriching data cataloging and data discoverability for our users. With our approach to building solutions that fit the scale at which the P&G business operates, we combine software engineering standard methodologies (Databricks) with modern software engineering standards (Azure, DevOps, SRE) to deliver value for P&G.

Responsibilities:
Writing and testing code for Data & Analytics applications and building end-to-end cloud-native (Azure) solutions.
Engineering applications throughout their entire lifecycle: development, deployment, upgrade, and replacement/termination.
Ensuring that development and architecture conform to established standards, including modern software engineering practices (CI/CD, Agile, DevOps).
Collaborating with internal technical specialists and vendors to develop final products that improve overall performance and efficiency and/or enable adaptation of new business processes.

Job Qualifications
Bachelor's degree in computer science or a related technical field.
4+ years of experience working as a Software Engineer (with a focus on developing in Python, PySpark, Databricks, ADF).
Full-stack engineering experience (Python/React/JavaScript/APIs).
Experience demonstrating modern software engineering practices (code standards, Gitflow, automated testing, CI/CD, DevOps).
Experience working with cloud infrastructure (Azure preferred).
Strong verbal, written, and interpersonal communication skills.
A strong desire to produce high-quality software through multi-functional teamwork, testing, code reviews, and other best practices.

You should also have:
Strong written and verbal English communication skills to influence others
Proven use of data and tools
Ability to prioritize multiple priorities
Ability to work collaboratively across different functions and geographies

We produce globally recognized brands and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount our leaders are able to lead with courage the vast array of brands, categories and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to know more. We are an equal opportunity employer and value diversity at our company. We do not discriminate against individuals on the basis of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor.
"At P&G, the hiring journey is personalized every step of the way, thereby ensuring equal opportunities for all, with a strong foundation of Ethics & Corporate Responsibility guiding everything we do. All the available job opportunities are posted either on our website - pgcareers.com, or on our official social media pages, for the convenience of prospective candidates, and do not require them to pay any kind of fees towards their application.” Job Schedule Full time Job Number R000134973 Job Segmentation Experienced Professionals (Job Segmentation)
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Title: Azure Data Engineer
Experience: 5+ Years

About the Company:
EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We're looking to hire a skilled ADF developer who is proficient in Python and the Microsoft Power Platform.

Job Responsibilities:
Analyse and translate business requirements into technical requirements and architecture.
Design, develop, test, and deploy Azure Data Factory (ADF) pipelines for ETL processes.
Create, maintain, and optimize Python scripts that interface with ADF and support data operations.
Utilize the Microsoft Power Platform for designing intuitive user interfaces, automating workflows, and creating effective database solutions.
Implement Power Apps, Power Automate, and Power BI to support better business decisions.
Collaborate with diverse teams to ensure seamless integration of ADF solutions with other software components.
Debug and resolve technical issues related to data transformations and processing.
Implement robust data validation and error-handling routines to ensure data consistency and accuracy.
Maintain documentation of all systems and processes developed, promoting transparency and consistency.
Monitor and optimize the performance of ADF solutions regularly.
Proactively stay up to date with the latest technologies and techniques in data handling and solutions.

Required Skills:
Proven working experience as an ADF developer.
Hands-on experience with data architectures, including complex data models and data governance.
Strong proficiency in Python and demonstrated experience with ETL processes.
Proficient knowledge of the Microsoft Power Platform (Power BI, Power Apps, and Power Automate).
Understanding of SQL and relational database concepts.
Familiarity with cloud technologies, particularly Microsoft Azure.
Excellent problem-solving skills and the ability to debug complex systems.

Preferred Skills:
Knowledge of standard authentication and authorization protocols such as OAuth, SAML, and LDAP.

Education:
A BS/MS degree in Computer Science, Engineering, or a related subject is required.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
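As a hedged sketch of how a Python script can interact with ADF, the following example triggers and polls a pipeline run using the azure-identity and azure-mgmt-datafactory packages. The subscription, resource group, factory, and pipeline names are placeholders, not details from the role.

```python
# Illustrative only: trigger and poll an ADF pipeline run from Python. Subscription,
# resource group, factory, and pipeline names are placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"   # hypothetical
FACTORY_NAME = "adf-example"          # hypothetical
PIPELINE_NAME = "pl_daily_load"       # hypothetical

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

run = adf_client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME, parameters={"load_date": "2024-01-01"}
)

# Poll until the run reaches a terminal state.
while True:
    status = adf_client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    print(f"Pipeline run {run.run_id}: {status}")
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)
```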
Posted 2 weeks ago