4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within….

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics.
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
· Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
· Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
· Work with other members of the project team to support delivery of additional project components (API interfaces).
· Evaluate the performance and applicability of multiple tools against customer requirements.
· Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementations in iterative sprints.
· Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements:
· Proven experience working as a data engineer.
· Highly proficient in using the Spark framework (Python and/or Scala).
· Extensive knowledge of data warehousing concepts, strategies, and methodologies.
· Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Designing and building data pipelines using API ingestion and streaming ingestion methods.
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure Cloud Infrastructure offerings.
· Strong experience in common data warehouse modeling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture certification would be a plus.

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: ADF Business Components, ADL Assistance, Android Debug Bridge (ADB)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
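For context, a minimal PySpark sketch of the raw-to-curated pipeline work this posting describes, reading files from ADLS, applying a simple transformation, and writing a curated Delta table. The storage account, paths, and column names are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark sketch of a raw-to-curated step on Databricks.
# All paths, container names, and columns below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_raw_to_curated").getOrCreate()

# Hypothetical ADLS Gen2 locations; real jobs would take these as parameters.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

raw_df = spark.read.option("header", "true").csv(raw_path)

curated_df = (raw_df
              .withColumn("amount", F.col("amount").cast("double"))
              .filter(F.col("amount").isNotNull())
              .groupBy("order_date", "region")
              .agg(F.sum("amount").alias("total_amount")))

# Delta is the default table format on Databricks; plain Spark needs delta-spark installed.
curated_df.write.format("delta").mode("overwrite").save(curated_path)
```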
Posted 2 weeks ago
3.0 years
3 - 5 Lacs
Gurgaon
On-site
#Freepost
Designation: Middleware Administrator L2
Experience: 3+ Years
Qualification: BE/BTech/Diploma in IT background

Roles & Responsibilities: Application Monitoring Services
✓ Monitor application response times from the end-user perspective in real time and alert organizations when performance is unacceptable. By alerting the user to problems and intelligently segmenting response times, the tool should quickly expose problem sources and minimize the time necessary for resolution.
✓ It should allow specific application transactions to be captured and monitored separately. This allows administrators to select the most important operations within business-critical applications to be measured and tracked individually.
✓ It should use baseline-oriented thresholds to raise alerts when application response times deviate from acceptable levels. This allows IT administrators to respond quickly to problems and minimize the impact on service delivery.
✓ It should automatically segment response-time information into network, server and local workstation components to easily identify the source of bottlenecks.
✓ Monitoring of applications, including Oracle Forms 10g, Oracle SSO 10g, OID 10g, Oracle Portal 10g, Oracle Reports 10g, Internet Application Server (OAS) 10.1.2.2.0, Oracle Web Server (OWS) 10.1.2.2.0, Oracle WebCenter Portal 12.2.1.3, Oracle Access Manager 12.2.1.3, Oracle Internet Directory 12.2.1.3, Oracle WebLogic Server 12.2.1.3, Oracle HTTP Server 12.2.1.3, Oracle ADF 12.2.1.3 (Fusion Middleware), Oracle Forms 12.2.1.3, Oracle Reports 12.2.1.3, mobile apps, Windows IIS, portal, web cache, BizTalk applications and DNS applications, Tomcat, etc.

Job Type: Full-time
Pay: ₹350,000.00 - ₹500,000.00 per year
Benefits: Health insurance, Provident Fund
Work Location: In person
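As a rough illustration of the response-time monitoring this posting describes, the sketch below times a single HTTP request and flags slow or failed responses. The URL and threshold are made-up examples; a real APM tool would baseline thresholds dynamically and segment timings across network, server, and workstation.

```python
# Toy response-time check; a production tool baselines thresholds and segments timings.
import time
import requests

URL = "https://example.com/portal/health"   # hypothetical endpoint
THRESHOLD_SECONDS = 2.0                     # hypothetical acceptable response time

start = time.monotonic()
try:
    response = requests.get(URL, timeout=10)
    elapsed = time.monotonic() - start
    if response.status_code != 200:
        print(f"ALERT: {URL} returned HTTP {response.status_code}")
    elif elapsed > THRESHOLD_SECONDS:
        print(f"ALERT: {URL} responded in {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
    else:
        print(f"OK: {URL} responded in {elapsed:.2f}s")
except requests.RequestException as exc:
    print(f"ALERT: {URL} unreachable: {exc}")
```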
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Gurgaon
On-site
# Freepost
Designation: Middleware Administrator L1
Location: Gurgaon
Experience: 2-4 years of experience
Qualification: B.E. / B.Tech / BCA

Required Key Skills:

Application Monitoring Services
1. Real-Time Performance Monitoring: Monitor application response times from the end-user perspective. Trigger alerts when performance is below acceptable thresholds. Segment response times to quickly identify problem sources and reduce resolution time.
2. Transaction-Level Monitoring: Enable tracking of specific, business-critical application transactions. Allow targeted monitoring for selected operations for better visibility and control.
3. Baseline-Oriented Threshold Alerts: Use dynamic baselines to raise alerts on deviation in application response times. Help administrators detect and address issues proactively.
4. Response Time Segmentation: Automatically categorize response time into network, server, and local workstation components. Assist in pinpointing performance bottlenecks.
5. Supported Applications and Platforms: Monitoring support includes Oracle Forms 10g and 12.2.1.3; Oracle SSO 10g and Oracle Access Manager 12.2.1.3; Oracle Internet Directory (OID) 10g and 12.2.1.3; Oracle Portal 10g and WebCenter Portal 12.2.1.3; Oracle Reports 10g and 12.2.1.3; Oracle Web Server (OWS) 10.1.2.2.0; Oracle Internet Application Server (OAS) 10.1.2.2.0; Oracle WebLogic Server 12.2.1.3; Oracle HTTP Server 12.2.1.3; Oracle ADF (Fusion Middleware) 12.2.1.3; mobile applications; Windows IIS; Web Cache; BizTalk applications; DNS applications; Apache Tomcat; etc.
6. Operational Activities: Application shutdown and startup; MIS report generation; load and performance monitoring; script execution for user account management; event and error log monitoring; daily health checklist compliance; portal status and content updates.
7. Logging and Reporting: System events and incidents logging; update SR and Incident tickets in the Symphony iServe tool.

Application Release Management
1. Release Coordination: Schedule, coordinate, and manage application releases across environments.
2. Deployment Management: Perform pre-deployment activities including code backup, new code placement, and restarting services post-deployment.

Job Types: Full-time, Permanent
Benefits: Health insurance, Provident Fund
Schedule: Morning shift, Rotational shift
Work Location: In person
Posted 2 weeks ago
3.0 years
1 - 9 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Design, develop, and implement data models and ETL processes for Power BI solutions
Be able to understand and create test scripts for data validation as it moves through various lifecycles in cloud-based technologies
Be able to work closely with business partners and data SMEs to understand Healthcare Quality Measures and their related business requirements
Conduct data validation after major/minor enhancements in the project and determine the best data validation techniques to implement
Communicate effectively with leadership and analysts across teams
Troubleshoot and resolve issues with jobs/pipelines/overhead
Ensure data accuracy and integrity between sources and consumers
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
Graduate degree or equivalent (B.Tech./MCA preferred) with overall 3+ years of work experience
3+ years of advanced understanding of at least one programming language - Python, Spark, Scala
Experience of working with cloud technologies, preferably Snowflake, ADF and Databricks
Experience of working with Agile methodology (preferably in Rally)
Knowledge of Unix shell scripting for automation and scheduling of batch jobs
Knowledge of configuration management - GitHub
Knowledge of relational databases - SQL Server, Oracle, Teradata, IBM DB2, MySQL
Knowledge of messaging queues - Kafka/ActiveMQ/RabbitMQ
Knowledge of CI/CD tools - Jenkins
Understanding of the relational database model and entity-relationship diagrams
Proven solid communication and interpersonal skills
Proven excellent written and verbal communication skills with the ability to provide clear explanations and overviews to others (internal and external) of their work efforts
Proven solid facilitation, critical thinking, problem solving, decision making and analytical skills
Demonstrated ability to prioritize and manage multiple tasks

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
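To illustrate the source-to-target data validation this posting mentions, here is a minimal PySpark sketch that reconciles row counts and business keys between a staging and a curated table. The table names and key columns are hypothetical placeholders, not details from the posting.

```python
# Illustrative source-vs-target validation step; table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("measure_validation").getOrCreate()

source_df = spark.table("staging.member_measures")   # hypothetical staging table
target_df = spark.table("curated.member_measures")   # hypothetical curated table

# 1. Row counts should match after the load.
source_count, target_count = source_df.count(), target_df.count()
assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}")

# 2. No records dropped or invented (anti-joins in both directions on the keys).
key_cols = ["member_id", "measure_id"]                # hypothetical business keys
missing = source_df.select(key_cols).exceptAll(target_df.select(key_cols)).count()
extra = target_df.select(key_cols).exceptAll(source_df.select(key_cols)).count()
assert missing == 0 and extra == 0, f"Key mismatch: missing={missing}, extra={extra}"

print("Validation passed: counts and keys reconcile between source and target.")
```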
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Job Title: Data Engineer
Experience: 5+ Years
Location: Pan India
Mode: Hybrid
Skill combination: Python AND AWS AND Databricks AND PySpark AND Elastic Search

We are looking for a Data Engineer to join our team to build, maintain, and enhance scalable, high-performance data pipelines and cloud-native solutions. The ideal candidate will have deep experience in Databricks, Python, PySpark, Elastic Search, and SQL, and a strong understanding of cloud-based ETL services, data modeling, and data security best practices.

Key Responsibilities:
Design, implement, and maintain scalable data pipelines using Databricks, PySpark, and SQL.
Develop and optimize ETL processes leveraging services like AWS Glue, GCP DataProc/DataFlow, Azure ADF/ADLF, and Apache Spark.
Build, manage, and monitor Airflow DAGs to orchestrate data workflows.
Integrate and manage Elastic Search for data indexing, querying, and analytics.
Write advanced SQL queries using window functions and analytics techniques.
Design data schemas and models that align with various business domains and use cases.
Optimize data warehousing performance and storage using best practices.
Ensure data security, governance, and compliance across all environments.
Apply data engineering design patterns and frameworks to build robust solutions.
Collaborate with Product, Data, and Engineering teams; support executive data needs.
Participate in Agile ceremonies and follow DevOps/DataOps/DevSecOps practices.
Respond to critical business issues as part of an on-call rotation.

Must-Have Skills:
Databricks (3+ years): Development and orchestration of data workflows.
Python & PySpark (3+ years): Hands-on experience in distributed data processing.
Elastic Search (3+ years): Indexing and querying large-scale datasets.
SQL (3+ years): Proficiency in analytical SQL including window functions.
ETL Services: AWS Glue, GCP DataProc/DataFlow, Azure ADF/ADLF
Airflow: Designing and maintaining data workflows.
Data Warehousing: Expertise in performance tuning and optimization.
Data Modeling: Understanding of data schemas and business-oriented data models.
Data Security: Familiarity with encryption, access control, and compliance standards.
Cloud Platforms: AWS (must), GCP and Azure (preferred).

Skills: Python, Databricks, PySpark, Elastic Search
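As a small illustration of the Airflow orchestration and Elastic Search integration listed above, here is a sketch of a two-task DAG: a transform step followed by indexing a run summary into Elasticsearch. It assumes Airflow 2.x and the elasticsearch Python client; the DAG id, schedule, cluster URL, and index name are invented for the example.

```python
# Minimal Airflow DAG sketch: run a transform, then index a summary document
# into Elasticsearch. DAG id, schedule, host, and index names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from elasticsearch import Elasticsearch


def transform(**context):
    # Placeholder for a Databricks/PySpark or Glue transform step.
    return {"rows_processed": 1000, "run_date": context["ds"]}


def index_summary(**context):
    summary = context["ti"].xcom_pull(task_ids="transform")
    es = Elasticsearch("http://localhost:9200")      # hypothetical cluster URL
    es.index(index="pipeline-runs", document=summary)


with DAG(
    dag_id="daily_sales_pipeline",                   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    index_task = PythonOperator(task_id="index_summary", python_callable=index_summary)
    transform_task >> index_task
```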
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Location: HYDERABAD OFFICE INDIA

Job Description: Are you looking to take your career to the next level? We’re looking for a Junior Software Engineer to join our Data & Analytics Core Data Lake Platform engineering team. We are searching for self-motivated candidates who will demonstrate modern Agile and DevOps practices to craft, develop, test and deploy IT systems and applications, delivering global projects in multinational teams. The P&G Core Data Lake Platform is a central component of the P&G data and analytics ecosystem. The CDL Platform is used to deliver a broad scope of digital products and frameworks used by data engineers and business analysts. In this role you will have an opportunity to use data engineering skills to deliver solutions enriching data cataloging and data discoverability for our users. With our approach to building solutions that fit the scale at which the P&G business operates, we combine software engineering standard methodologies (Databricks) with modern software engineering standards (Azure, DevOps, SRE) to deliver value for P&G.

RESPONSIBILITIES:
Writing and testing code for Data & Analytics applications and building E2E cloud-native (Azure) solutions.
Engineering applications throughout their entire lifecycle, from development and deployment through upgrade and replacement/termination.
Ensuring that development and architecture conform to established standards, including modern software engineering practices (CICD, Agile, DevOps).
Collaborate with internal technical specialists and vendors to develop final products to improve overall performance, efficiency and/or to enable adaptation of new business processes.

Job Qualifications:
Bachelor’s degree in computer science or a related technical field.
4+ years of experience working as a Software Engineer (with focus on developing in Python, PySpark, Databricks, ADF).
Fullstack engineering experience (Python/React/JavaScript/APIs).
Experience demonstrating modern software engineering practices (code standards, Gitflow, automated testing, CICD, DevOps).
Experience working with cloud infrastructure (Azure preferred).
Strong verbal, written, and interpersonal communication skills.
A strong desire to produce high quality software through multi-functional teamwork, testing, code reviews, and other best practices.

YOU ALSO SHOULD HAVE:
Strong written and verbal English communication skills to influence others
Proven use of data and tools
Ability to manage multiple priorities
Ability to work collaboratively across different functions and geographies

We produce globally recognized brands and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount our leaders are able to lead with courage the vast array of brands, categories and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to know more. We are an equal opportunity employer and value diversity at our company. We do not discriminate against individuals on the basis of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor.

“At P&G, the hiring journey is personalized every step of the way, thereby ensuring equal opportunities for all, with a strong foundation of Ethics & Corporate Responsibility guiding everything we do. All the available job opportunities are posted either on our website - pgcareers.com, or on our official social media pages, for the convenience of prospective candidates, and do not require them to pay any kind of fees towards their application.”

Job Schedule: Full time
Job Number: R000134973
Job Segmentation: Experienced Professionals (Job Segmentation)
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Title: Azure Data Engineer
Experience: 5+ Years

About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We're looking to hire a skilled ADF Developer who has proficiency in Python and Microsoft Power Platform.

Job Responsibilities:
Analyse and translate business requirements into technical requirements and architecture.
Design, develop, test, and deploy Azure Data Factory (ADF) pipelines for ETL processes.
Create, maintain, and optimize Python scripts that interface with ADF and support data operations.
Utilize Microsoft Power Platform for designing intuitive user interfaces, automating workflows, and creating effective database solutions.
Implement Power Apps, Power Automate, and Power BI to support better business decisions.
Collaborate with diverse teams to ensure seamless integration of ADF solutions with other software components.
Debug and resolve technical issues related to data transformations and processing.
Implement robust data validation and error handling routines to ensure data consistency and accuracy.
Maintain documentation of all systems and processes developed, promoting transparency and consistency.
Monitor and optimize performance of ADF solutions regularly.
Proactively stay up-to-date with the latest technologies and techniques in data handling and solutions.

Required Skills:
Proven working experience as an ADF Developer.
Hands-on experience with data architectures, including complex data models and data governance.
Strong proficiency in Python and demonstrated experience with ETL processes.
Proficient knowledge of Microsoft Power Platform (Power BI, Power Apps, and Power Automate).
Understanding of SQL and relational database concepts.
Familiarity with cloud technologies, particularly Microsoft Azure.
Excellent problem-solving skills and ability to debug complex systems.

Preferred Skills:
Knowledge of standard authentication and authorization protocols such as OAuth, SAML, and LDAP.

Education: A BS/MS degree in Computer Science, Engineering, or a related subject is required.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
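For a flavour of the "Python scripts that interface with ADF" responsibility, here is a sketch using the Azure SDK for Python (azure-identity and azure-mgmt-datafactory) to trigger a pipeline run and poll its status. The subscription, resource group, factory, pipeline, and parameter names are placeholders, not details from the posting.

```python
# Sketch: trigger an ADF pipeline run from Python and poll until it completes.
# Subscription, resource group, factory, pipeline, and parameter values are placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
RESOURCE_GROUP = "rg-data-platform"                        # placeholder
FACTORY_NAME = "adf-example"                               # placeholder
PIPELINE_NAME = "pl_ingest_claims"                         # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-31"},                # hypothetical pipeline parameter
)

# Poll until the run reaches a terminal state (Succeeded, Failed, Cancelled).
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```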
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Company
Why Join 7-Eleven Global Solution Center? When you join us, you'll embrace ownership as teams within specific product areas take responsibility for end-to-end solution delivery, supporting local teams and integrating new digital assets. Challenge yourself by contributing to products deployed across our extensive network of convenience stores, processing over a billion transactions annually. Build solutions for scale, addressing the diverse needs of our 84,000+ stores in 19 countries. Experience growth through cross-functional learning, encouraged and applauded at 7-Eleven GSC. With our size, stability, and resources, you can navigate a rewarding career. Embody leadership and service as 7-Eleven GSC remains dedicated to meeting the needs of customers and communities.

Why We Exist, Our Purpose and Our Transformation
7-Eleven is dedicated to being a customer-centric, digitally empowered organization that seamlessly integrates our physical stores with digital offerings. Our goal is to redefine convenience by consistently providing top-notch customer experiences and solutions in a rapidly evolving consumer landscape. Anticipating customer preferences, we create and implement platforms that empower customers to shop, pay, and access products and services according to their preferences. To achieve success, we are driving a cultural shift anchored in leadership principles, supported by the realignment of organizational resources and processes.

At 7-Eleven we are guided by our Leadership Principles. Each principle has a defined set of behaviours which help guide the 7-Eleven GSC team to Serve Customers and Support Stores: Be Customer Obsessed; Be Courageous with Your Point of View; Challenge the Status Quo; Act Like an Entrepreneur; Have an “It Can Be Done” Attitude; Do the Right Thing; Be Accountable.

About the Role
Job Title: Manager - QA
Location: Bengaluru

Responsibilities
10+ years of experience to lead QA strategy across Oracle MOM modules (ORMS, MFCS, RPM, Invoice Matching, ReSA, EBS, EPM). Functional knowledge of Oracle MOM is an advantage.
Manage test planning, execution, and defect triage across all test phases (Functional, Regression, UAT, Integration).
Own and evolve the automated test framework using Selenium, Robot Framework, Python, and RPA principles.
Oversee CI/CD test pipeline integration via Jenkins.
Define and execute performance testing plans using tools like JMeter or LoadRunner.
Coordinate API testing using Postman, REST Assured, or similar tools for MFCS and other services.
Drive regular regression cycles across patch releases, upgrades, and enhancements.
Collaborate with business analysts, developers, and infrastructure teams to align test coverage.
Ensure proper test data management and environment readiness.
Experience managing automation teams of 5-7 resources.

Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
10+ years of experience leading QA strategy across Oracle MOM modules (ORMS, MFCS, RPM, Invoice Matching, ReSA, EBS, EPM). Functional knowledge of Oracle MOM is an advantage.
Experience working in Agile environments is highly preferred.

Required Skills
Proficient in developing UI test automation frameworks using Python and RPA tools (e.g., UiPath, Automation Anywhere, or Blue Prism), Jenkins and Java/JavaScript.
Must have good hands-on experience with UI automation using Selenium/RPA/Cucumber.
Write and maintain automated test scripts using Java/Python and RPA tools (e.g., UiPath, Automation Anywhere, or Blue Prism).
Work with the development team to create and implement effective test automation strategies for continuous integration/continuous delivery (CI/CD) pipelines. Develop automated test frameworks and enhance existing automation suites to improve testing efficiency and coverage. Hands-on experience in Oracle ADF Web, API testing and Oracle Database testing. Strong knowledge of test management and defect tracking tools (e.g., JIRA). Experience with database validations using Oracle SQL queries. Familiarity with Agile methodologies and practices. Ability to perform backend data verification. Preferred Skills Strong analytical and problem-solving skills. Excellent verbal and written communication skills. Ability to work collaboratively across cross-functional teams. Attention to detail and a proactive approach to identifying issues. Ability to manage multiple tasks and priorities effectively. Pay range and compensation package 7-Eleven Global Solution Center offers a comprehensive benefits plan tailored to meet the needs and improve the overall experience of our employees, aiding in the management of both their professional and personal aspects. Equal Opportunity Statement 7-Eleven Global Solution Center is an Equal Opportunity Employer committed to diversity in the workplace. Our strategy focuses on three core pillars – workplace culture, diverse talent and how we show up in the communities we serve. As the recognized leader in convenience, the 7-Eleven family of brands embraces diversity, equity and inclusion (DE+I). It’s not only the right thing to do for customers, Franchisees and employees—it’s a business imperative.
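To make the UI-automation requirement above concrete, here is a small Selenium-with-pytest style sketch of an automated check. The application URL and element locators are invented for illustration and are not taken from the posting.

```python
# Tiny Selenium sketch of the kind of UI automation this role describes.
# The URL and element locators are invented for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def test_invoice_search_returns_results():
    driver = webdriver.Chrome()
    try:
        driver.get("https://merch.example.com/invoice-matching")   # hypothetical app
        driver.find_element(By.ID, "invoice-number").send_keys("INV-1001")
        driver.find_element(By.ID, "search-button").click()

        # Wait for the results table to render, then assert at least one row is present.
        rows = WebDriverWait(driver, 10).until(
            EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table#results tr"))
        )
        assert len(rows) > 0, "Expected at least one matching invoice row"
    finally:
        driver.quit()
```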
Posted 2 weeks ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About Gartner IT
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About The Role
Senior Data Engineer for production support who will provide end-to-end support for daily data loads and manage production issues.

What Will You Do
Monitor and support various data loads for our Enterprise Data Warehouse.
Support business users who are accessing Power BI dashboards and data warehouse tables.
Handle incidents and service requests within defined SLAs.
Work with the team on managing Azure resources including but not limited to Databricks, Azure Data Factory pipelines, ADLS, etc.
Build new ETL/ELT pipelines using Azure data products like Azure Data Factory, Databricks, etc.
Help build best practices and processes.
Coordinate with upstream/downstream teams to resolve data issues.
Work with the QA team and Dev team to ensure appropriate automated regressions are added to detect such issues in future.
Work with the Dev team to improve automated error handling so manual interventions can be reduced.
Analyze processes and patterns so other similar unreported issues can be resolved in one go.

What You Will Need
Strong IT professional with 3-4 years of experience in Data Engineering. The candidate should have strong analytical and problem-solving skills.

Must Have
3-4 years of experience in data warehouse design and development and ETL using Azure Data Factory (ADF)
Experience in writing complex TSQL procedures on MPP platforms - Synapse, Snowflake, etc.
Experience in analyzing complex code to troubleshoot failures and, where applicable, recommend best practices around error handling, performance tuning, etc.
Ability to work independently, as well as part of a team, and experience working with fast-paced operations/dev teams.
Good understanding of business processes and analyzing underlying data
Understanding of dimensional and relational modelling
Detail-oriented, with the ability to plan, prioritize, and meet deadlines in a fast-paced environment.
Knowledge of Azure cloud technologies
Exceptional problem-solving skills

Nice To Have
Experience crafting, building, and deploying applications in a DevOps environment utilizing CI/CD tools
Understanding of dimensional and relational modelling
Relevant certifications
Basic knowledge of Power BI.

Who Are You
Bachelor’s degree or foreign equivalent degree in Computer Science or a related field required
Excellent communication skills.
Able to work independently or within a team proactively in a fast-paced AGILE-SCRUM environment.
Owns success – Takes responsibility for the successful delivery of the solutions.
Strong desire to improve upon their skills in tools and technologies

Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories.
We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work . What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com . Job Requisition ID:99740 By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
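As a rough sketch of the daily-load monitoring described in this role, the snippet below runs a freshness/row-count check against a warehouse table over ODBC. The server, credentials, table name, and freshness rule are placeholders, not Gartner specifics.

```python
# Sketch of a daily-load health check against a warehouse table via pyodbc.
# The server, credentials, table, and freshness rule are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-synapse.sql.azuresynapse.net;"   # placeholder server
    "DATABASE=edw;UID=svc_monitor;PWD=***"           # placeholder credentials
)

query = """
    SELECT COUNT(*) AS row_count, MAX(load_timestamp) AS last_load
    FROM dw.fact_daily_sales                          -- placeholder table
    WHERE load_timestamp >= CAST(GETDATE() AS DATE)
"""

row = conn.cursor().execute(query).fetchone()
if row.row_count == 0:
    print("ALERT: no rows loaded today - raise an incident per SLA")
else:
    print(f"OK: {row.row_count} rows loaded, last load at {row.last_load}")
conn.close()
```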
Posted 2 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Looking for a Solution Architect with at least 8 years of experience in APEX, ADF, Workflow, ATP, PL/SQL and OIC.

Job Summary: We are seeking an experienced Solution Architect with deep expertise in Oracle technologies, including APEX, ADF, Workflow, ATP, PL/SQL, and Oracle Integration Cloud (OIC). The ideal candidate will play a key role in designing and delivering enterprise-grade solutions, providing technical leadership, and ensuring seamless integration of Oracle applications and services across the ecosystem.

Key Responsibilities:
Architect and design robust, scalable, and secure enterprise applications using Oracle APEX, ADF, and OIC.
Lead solution development and integration efforts across Oracle Cloud and on-premise systems.
Define and implement workflow processes using Oracle Workflow and related technologies.
Develop complex PL/SQL procedures, functions, and triggers to support business logic.
Optimize and manage Oracle Autonomous Transaction Processing (ATP) environments.
Provide end-to-end integration solutions using Oracle Integration Cloud (OIC).
Collaborate with business analysts and stakeholders to gather requirements and translate them into technical specifications.
Lead technical design reviews, conduct code reviews, and ensure best practices.
Ensure solutions meet performance, scalability, and security requirements.
Provide mentorship and guidance to development teams.

Required Skills & Qualifications:
Proven experience (8–10+ years) in Oracle technologies, especially APEX, ADF, and PL/SQL.
Strong understanding of Oracle Autonomous Database (ATP) and its capabilities.
Hands-on experience designing and developing integrations using Oracle Integration Cloud (OIC).
Experience in Oracle Workflow development and customization.
Proficiency in database performance tuning, PL/SQL optimization, and Oracle SQL.
Sound understanding of cloud infrastructure, REST/SOAP web services, and integration patterns.
Excellent communication and stakeholder management skills.
Bachelor's degree in Computer Science, Engineering, or a related field (Master’s preferred).
Oracle certifications are a plus.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
🚀 We’re Hiring – Oracle Retail Techno-Functional Consultant (Remote Opportunity) 🛍️🌐 Join us on a mission to modernize retail systems for a leading US-based retailer ! We’re looking for Oracle Retail experts who can bring both functional clarity and technical depth to high-impact projects. 🔹 Key Skills & Experience: ✔️ Strong expertise in Oracle Retail modules : RMS, RPM, ReIM, ReSA, RMFCS, RPCS (cloud experience is a big plus) ✔️ Proficiency in PL/SQL, Oracle APEX , and BI Publisher ✔️ Deep functional understanding of merchandising , inventory , purchase orders , pricing , and invoice matching ✔️ Hands-on experience in data migration & cleansing — familiarity with Azure Databricks, ADF, or Python preferred ✔️ Agile and DevOps project experience with proven global stakeholder communication ✔️ Ability to lead modules , manage offshore teams, and troubleshoot production issues independently 🧑💻 Work Mode : 100% Remote 🌍 Client : US-based retailer 📈 Great opportunity to work with cross-functional global teams on Oracle Retail transformation programs! 📩 Ready to apply or refer a colleague? Message us or drop your resume today. #OracleRetail #RemoteJobs #RMS #RetailTransformation #PLSQL #Databricks #ADF #TechLead #RetailIT #HiringNow #OracleJobs #APEX #BIpublisher
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients, and communities. Working at WPP means being part of a global network of more than 100,000 talented people dedicated to doing extraordinary work for our clients. We operate in over 100 countries, with corporate headquarters in New York, London and Singapore. WPP is a world leader in marketing services, with deep AI, data and technology capabilities, global presence and unrivalled creative talent. Our clients include many of the biggest companies and advertisers in the world, including approximately 300 of the Fortune Global 500. Our people are the key to our success. We're committed to fostering a culture of creativity, belonging and continuous learning, attracting and developing the brightest talent, and providing exciting career opportunities that help our people grow.

Why we're hiring:
At WPP, technology is at the heart of everything we do, and it is WPP IT’s mission to enable everyone to collaborate, create and thrive. WPP IT is undergoing a significant transformation to modernise ways of working, shift to cloud and micro-service-based architectures, drive automation, digitise colleague and client experiences and deliver insight from WPP’s petabytes of data. WPP Media is the world’s leading media investment company responsible for more than $63B in annual media investment through agencies Mindshare, MediaCom, Wavemaker, Essence and m/SIX, as well as the outcomes-driven programmatic audience company Xaxis and data and technology company Choreograph. WPP Media's portfolio includes Data & Technology, Investment and Services, all united in a vision to shape the next era of media where advertising works better for people. By leveraging all the benefits of scale, the company innovates, differentiates and generates sustained value for our clients wherever they do business. The WPP Media team in WPP IT are the technology solutions partner for the WPP Media group of agencies and are accountable for co-ordinating and assuring end-to-end change delivery, managing the WPP Media IT technology life-cycle and innovation pipeline. As part of the global Data & Measure team, this role will play a key part in building scalable, insightful and user-centric data products. Working closely with global stakeholders and cross-functional teams, the Senior Power BI Developer will lead both the frontend development and backend data modeling for business-critical dashboards and analytics solutions. This is not a pure report-builder role — success depends on a solid understanding of data architecture, ETL, and integration, paired with sharp visual storytelling and dashboard design.

What you'll be doing:
Primary Responsibilities:
Design and build interactive dashboards and reports using Power BI
Work closely with product teams to understand reporting needs and translate them into scalable data products
Own data transformation and modeling in Power Query and DAX
Maintain and optimize data flows, datasets, and semantic models
Ensure data accuracy, usability, and access control across published solutions
Collaborate with backend teams to shape data sources for frontend consumption
Document BI solutions, including business logic, KPIs, and metadata
Partner with global teams to define standards and reuse patterns

Additional Responsibilities:
Support Power BI integration into existing or evolving global platforms, such as Measure.
Contribute to defining global best practices for BI and self-service enablement
Participate in data quality reviews and source system integration discussions
Guide junior developers or offshore partners when necessary

What you'll need:
Required Skills:
Bachelor’s degree in computer science, Engineering, or a related field
5+ years of experience with Power BI development (incl. DAX, Power Query)
Experience working with large datasets and complex data models
Strong understanding of SQL and backend data principles
Ability to bridge business needs with technical implementation
Excellent data visualization and storytelling skills
Experience working in cross-functional, global teams
Strong English communication skills (verbal and written)

Preferred Skills:
Experience with Azure SQL, Synapse, or similar data platforms
Familiarity with Azure Data Factory (ADF) for orchestrating data pipelines and ETL processes
Exposure to data warehousing or MDM concepts
Familiarity with DevOps processes and version control (e.g., Git)
Experience working in an agile or product-based environment
Power Platform exposure (Power Automate, Power Apps) is a plus

Who you are:
You're open: We are inclusive and collaborative; we encourage the free exchange of ideas; we respect and celebrate diverse views. We are open-minded: to new ideas, new partnerships, new ways of working.
You're optimistic: We believe in the power of creativity, technology and talent to create brighter futures for our people, our clients and our communities. We approach all that we do with conviction: to try the new and to seek the unexpected.
You're extraordinary: We are stronger together: through collaboration we achieve the amazing. We are creative leaders and pioneers of our industry; we provide extraordinary every day.

What we'll give you:
Passionate, inspired people – We aim to create a culture in which people can do extraordinary work.
Scale and opportunity – We offer the opportunity to create, influence and complete projects at a scale that is unparalleled in the industry.
Challenging and stimulating work – Unique work and the opportunity to join a group of creative problem solvers.

Are you up for the challenge?
We believe the best work happens when we're together, fostering creativity, collaboration, and connection. That's why we’ve adopted a hybrid approach, with teams in the office around four days a week. If you require accommodations or flexibility, please discuss this with the hiring team during the interview process. WPP is an equal opportunity employer and considers applicants for all positions without discrimination or regard to particular characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers. Please read our Privacy Notice (https://www.wpp.com/en/careers/wpp-privacy-policy-for-recruitment) for more information on how we process the information you provide.
Posted 2 weeks ago
12.0 years
0 Lacs
India
On-site
Job Title: Azure Administrator Location: Bangalore - Hybrid Job Type: Full-time Experience Level: Mid to Senior (4–12 years) About The Role: We are seeking a skilled and proactive Azure Administrator to join our cloud infrastructure and operations team. The ideal candidate will have strong experience managing and securing Microsoft Azure environments, implementing DevOps practices, and supporting CI/CD pipelines. In addition to infrastructure responsibilities, the role requires hands-on experience with Azure Data Factory for managing data workflows and integrations across systems. You will help ensure seamless infrastructure performance, secure access, and efficient data operations across the cloud ecosystem. Key Responsibilities: Azure Infrastructure Management: Administer and optimize Azure services including Virtual Machines, Virtual Networks, Storage Accounts, Load Balancers, and Azure Resource Groups. Manage provisioning, scaling, and cost optimization of Azure resources using best practices. Azure Data Factory (ADF): Design, build, and manage data pipelines and integration workflows using Azure Data Factory. Collaborate with data teams to support ETL/ELT operations, data movement, and transformation across hybrid and cloud systems. Monitor data pipeline performance, troubleshoot failures, and optimize pipeline efficiency. DevOps & CI/CD: Build and maintain automated CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins. Integrate deployment automation using Infrastructure-as-Code (IaC) tools such as Terraform, Bicep, or ARM templates. Ensure seamless deployment, versioning, and rollback of infrastructure and application components. Identity & Access Management (IAM): Administer Azure Active Directory (Azure AD), manage user identities, groups, service principals, and enterprise applications. Implement and enforce MFA, Conditional Access policies, and SSO across services. Role-Based Access Control (RBAC): Define and apply granular RBAC policies to control access to resources. Ensure least-privilege access principles and maintain access audit logs. Monitoring & Security: Set up and manage monitoring using Azure Monitor, Log Analytics, and Application Insights. Respond to incidents, participate in on-call rotations, and resolve critical system issues. Apply security recommendations from Azure Security Center and enforce compliance standards. Documentation & Process Improvement: Maintain clear and updated documentation of infrastructure configurations, access controls, and deployment processes. Identify opportunities to improve automation, performance, and data reliability. Required Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent hands-on experience). 4–7 years of experience in Azure administration, cloud infrastructure, and data integration. Strong hands-on expertise with Azure services, including: Azure Virtual Networks, VMs, Storage, and Networking. Azure Active Directory and RBAC. Azure Data Factory (ADF) – data pipeline creation, management, and troubleshooting. Experience with CI/CD pipeline tools and DevOps practices. Proficiency in scripting (PowerShell, Bash, Azure CLI). Experience with IaC tools: Terraform, Bicep, or ARM templates. Strong understanding of cloud security and identity management. Excellent analytical and troubleshooting skills. Strong written and verbal communication. Collaborative, detail-oriented, and proactive mindset. 
Preferred Skills: Azure Certifications (e.g., AZ-104, AZ-400, AZ-500, or DP-203). Experience with hybrid cloud environments. Knowledge of data integration with SQL, Blob Storage, Synapse, or on-prem systems via ADF. Familiarity with Docker, Kubernetes, or AKS is a plus.
Posted 2 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Role: We are seeking a seasoned Power BI Technical Manager to lead our business intelligence and data visualization initiatives. The ideal candidate will have deep expertise in Power BI and Microsoft’s data stack, combined with proven leadership experience managing BI teams and driving data strategy across the organization. Key Responsibilities: Lead the end-to-end design, development, deployment, and maintenance of Power BI dashboards and reports. Collaborate with business stakeholders to understand reporting needs, translate requirements into technical solutions, and deliver high-impact dashboards. Architect scalable Power BI data models, ensure performance optimization, and enforce data governance best practices. Manage a team of BI developers and analysts; mentor and support their growth and technical development. Oversee integration of data from various sources (SQL Server, Azure, Excel, APIs, etc.) into Power BI. Ensure data accuracy, security, and compliance with internal and external policies. Drive adoption of self-service BI practices across departments. Collaborate with cross-functional teams including data engineering, IT, and business functions. Stay updated on the latest trends in BI, data visualization, and Microsoft Power Platform. Required Skills & Experience: 10 to 12 years of overall experience in Business Intelligence, with at least 5+ years hands-on in Power BI. Strong experience in DAX, Power Query (M language), data modeling, and visual storytelling. Proficiency with SQL Server, SSIS, and Azure Data Services (ADF, Synapse, etc.). Solid understanding of data warehousing concepts and data governance frameworks. Experience managing and mentoring BI/analytics teams. Ability to translate business needs into technical solutions with high accuracy and efficiency. Experience working in Agile/Scrum environments. Excellent communication, stakeholder management, and presentation skills. Microsoft certifications (e.g., DA-100, PL-300, or related) preferred. Educational Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or related field. Why Join Us? Work on cutting-edge BI solutions that drive strategic decisions. Collaborative and innovation-driven work environment. Competitive compensation and performance-based growth opportunities.
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Design, develop, and implement data models and ETL processes for Power BI solutions
Be able to understand and create test scripts for data validation as it moves through various lifecycles in cloud-based technologies
Be able to work closely with business partners and data SMEs to understand Healthcare Quality Measures and their related business requirements
Conduct data validation after major/minor enhancements in the project and determine the best data validation techniques to implement
Communicate effectively with leadership and analysts across teams
Troubleshoot and resolve issues with jobs/pipelines/overhead
Ensure data accuracy and integrity between sources and consumers
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
Graduate degree or equivalent (B.Tech./MCA preferred) with overall 3+ years of work experience
3+ years of advanced understanding of at least one programming language - Python, Spark, Scala
Experience of working with cloud technologies, preferably Snowflake, ADF and Databricks
Experience of working with Agile methodology (preferably in Rally)
Knowledge of Unix shell scripting for automation and scheduling of batch jobs
Knowledge of configuration management - GitHub
Knowledge of relational databases - SQL Server, Oracle, Teradata, IBM DB2, MySQL
Knowledge of messaging queues - Kafka/ActiveMQ/RabbitMQ
Knowledge of CI/CD tools - Jenkins
Understanding of the relational database model and entity-relationship diagrams
Proven solid communication and interpersonal skills
Proven excellent written and verbal communication skills with the ability to provide clear explanations and overviews to others (internal and external) of their work efforts
Proven solid facilitation, critical thinking, problem solving, decision making and analytical skills
Demonstrated ability to prioritize and manage multiple tasks

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 weeks ago
10.0 years
0 Lacs
India
On-site
About Company
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

Job Description
We are seeking a Senior Software Engineer with strong experience in .NET, Microsoft Azure, and Identity Access Management (IAM) technologies. The ideal candidate will have deep technical knowledge and proven success in designing, developing, and maintaining enterprise-grade applications with a focus on performance, scalability, and security.

Key Responsibilities
Design and implement scalable software using C#, .NET, and related technologies.
Develop technical documentation including high-level/low-level designs and UML diagrams.
Build and maintain secure APIs and microservices using Azure services (App Service, APIM, ADF, AppInsights).
Implement fine-grained authorization policies using IAM tools (e.g., PlainID, Azure AD/Entra ID).
Design, optimize, and manage relational and non-relational databases (SQL Server, Oracle, PostgreSQL).
Integrate and support cloud-based deployments using Azure DevOps CI/CD pipelines.
Provide production support, perform root cause analysis, and resolve issues proactively.
Participate in Agile ceremonies, sprint planning, stand-ups, and retrospectives.

Required Skills & Experience
10+ years of experience in software development, system design, and architecture.
Proficient in C#, .NET, JavaScript, and Python.
Strong experience with RESTful/SOAP APIs, API Management, and Swagger/OpenAPI.
Hands-on with Azure services: Logic Apps, DevOps, ADF, APIM, Databricks (desired).
Skilled in writing complex SQL queries, stored procedures, and optimizing database performance.
Familiarity with IAM tools like PlainID and Azure Entra ID (formerly AD).
Experience with SharePoint development and Power Automate.
Excellent communication, documentation, and cross-functional collaboration skills.

Benefits & Perks
Opportunity to work with leading global clients
Exposure to modern technology stacks and tools
Supportive and collaborative team environment
Continuous learning and career development opportunities

Skills: JavaScript, Databricks, database management, PlainID, CI/CD, SOAP APIs, .NET, APIM, system design, Agile methodologies, SQL, Python, PostgreSQL, Microsoft Azure, Power Automate, SharePoint, Oracle, Identity Access Management (IAM), RESTful APIs, ORM, Azure, UML, C#, Azure DevOps, API management, Azure AD, IAM, microservices
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
WHO WE ARE: Beyondsoft is a leading mid-sized business IT and consulting company that combines modern technologies and proven methodologies to tailor solutions that move your business forward. Our global head office is based in Singapore, and our team is made up of diversely talented experts who thrive on innovation and on pushing the bounds of technology to solve our customers’ most pressing challenges. When it comes time to deliver, we set our sights on that sweet spot where brilliance, emerging technologies, best practices, and accountability converge. We have a global presence spanning four continents (North America, South America, Europe, and Asia). Our global network of talent and customer-centric engagement model enables us to provide top-quality services on an unprecedented scale.

WHAT WE’RE ABOUT: We believe that collaboration, transparency, and accountability are the values that guide our business, our delivery, and our brand. Everyone has something to bring to the table, and we believe in working together with our peers and clients to leverage the best of one another in everything we do. When we proactively collaborate, business decisions become easier, innovation is greater, and outcomes are better. Our ability to achieve our mission and live out our values depends upon a diverse, equitable, and inclusive culture. So, we strive to foster a workplace where people have the respect, support, and voice they deserve, where innovative ideas flourish, and where people can unleash their brilliance. For more information regarding DEI at Beyondsoft, please go to https://www.beyondsoft.com/diversity/.

POSITION SUMMARY: As a Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines and infrastructure. You’ll work closely with analytics, engineering, and product teams to ensure data integrity and enable high-impact decision-making. This position requires flexibility to work in the PST time zone.

ADDITIONAL REQUIREMENT FOR REMOTE POSITIONS: For remote positions, all candidates must complete a video screen with our corporate recruiting team.

WHAT YOU WILL BE DOING:
Maintain automated data onboarding and diagnostic tools for AIP partners
Monitor ADF pipelines and mitigate issues with pipeline runs as needed (see the sketch following this listing)
Maintain the Privacy Dashboard and Bing user interests for Bing Growth team usage
Participate in and resolve live-site issues in related areas
Data platform development and maintenance, notebook-based processing pipelines, and MT migration
Manage the regular data quality Cosmos/MT jobs
Online tooling and support, such as DADT tools
Watch for abnormal patterns, perform ad-hoc data quality analysis, and investigate broken user ad-click cases daily
Perform additional duties as assigned

MINIMUM QUALIFICATIONS:
Bachelor’s degree or higher in Computer Science or a related field
At least 3 years of experience in software development
A solid grasp of quality software development practices
Ability to communicate quickly across time zones
Excellent written and verbal communication skills in English
Self-motivated
Coding languages: Java, C#, Python, Scala
Technologies: Apache Spark, Apache Flink, Apache Kafka, Hadoop, Cosmos, SQL
Azure resource management: Azure Data Factory, Azure Databricks, Azure Key Vault, Managed Identity, Azure Storage, etc.
MS Project
Big data experience is a plus
Occasional, infrequent in-person activity may be required

WHAT WE HAVE TO OFFER: Because we know how important our people are to the success of our clients, it’s a priority to make sure we stay committed to our employees and to making Beyondsoft a great place to work. We take pride in offering competitive compensation and benefits along with a company culture that embodies continuous learning, growth, and training with a dedicated focus on employee satisfaction and work/life balance. Beyondsoft provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type with regard to race, color, religion, age, sex, national origin, disability status, genetics, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, and the full employee lifecycle up through and including termination.
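The ADF pipeline monitoring duty above can be scripted with the azure-mgmt-datafactory SDK. This is a minimal sketch under stated assumptions: the subscription ID, resource group (rg-data-platform), and factory name (adf-analytics) are placeholders, and authentication uses DefaultAzureCredential.

```python
# Minimal sketch: list ADF pipeline runs from the last 24 hours and flag failures.
# Names are placeholders; requires azure-identity and azure-mgmt-datafactory.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-data-platform"     # hypothetical
FACTORY_NAME = "adf-analytics"          # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

now = datetime.now(timezone.utc)
runs = client.pipeline_runs.query_by_factory(
    RESOURCE_GROUP,
    FACTORY_NAME,
    RunFilterParameters(last_updated_after=now - timedelta(hours=24),
                        last_updated_before=now),
)

# Surface failed runs so they can be investigated or re-triggered.
for run in runs.value:
    if run.status == "Failed":
        print(f"{run.pipeline_name} ({run.run_id}): {run.message}")
```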
Posted 2 weeks ago
8.0 years
0 Lacs
India
On-site
About Company Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently. About The Role We are seeking a Senior Data Engineer to join our growing cloud data team. In this role, you will design and implement scalable data pipelines and ETL processes using Azure Databricks , Azure Data Factory , PySpark , and Spark SQL . You’ll work with cross-functional teams to develop high-quality, secure, and efficient data solutions in a data lakehouse architecture on Azure. Key Responsibilities Design, develop, and optimize scalable data pipelines using Databricks, ADF, PySpark, Spark SQL, and Python Build robust ETL workflows to transform and load data into a lakehouse architecture on Azure Ensure data quality, security, and compliance with data governance and privacy standards Collaborate with stakeholders to gather business requirements and deliver technical data solutions Create and maintain technical documentation for workflows, architecture, and data models Work within an Agile environment and track tasks using tools like Azure DevOps Required Skills & Experience 8+ years of experience in data engineering and enterprise data platform development Proven expertise in Azure Databricks, Azure Data Factory, PySpark, and Spark SQL Strong understanding of Data Warehouses, Data Marts, and Operational Data Stores Proficient in writing complex SQL / PL-SQL queries and understanding data models and data lineage Knowledge of data management best practices: data quality, lineage, metadata, reference/master data Experience working in Agile teams with tools like Azure DevOps Strong problem-solving skills, attention to detail, and the ability to multi-task effectively Excellent communication skills for interacting with both technical and business teams Benefits And Perks Opportunity to work with leading global clients Exposure to modern technology stacks and tools Supportive and collaborative team environment Continuous learning and career development opportunities Skills: lineage,data modeling,pyspark,metadata,spark sql,data marts,azure databricks,sql,azure data factory,pl-sql,spark,pl/sql,adf,data governance,python,data warehouses
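As a minimal sketch of the kind of PySpark/Spark SQL pipeline this role describes (not Papigen's actual codebase), the snippet below reads raw Parquet from a hypothetical ADLS container, applies simple transformations, and writes a partitioned Delta table in a lakehouse silver layer. Paths, table names, and columns are assumptions.

```python
# Minimal lakehouse ETL sketch for Databricks with Delta Lake; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-etl-sketch").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024/"   # hypothetical path

orders = (
    spark.read.format("parquet").load(raw_path)
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
    .dropDuplicates(["order_id"])
)

# Write to a curated (silver) Delta table, partitioned for downstream queries.
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")
(
    orders.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("silver.sales_orders")
)

# Spark SQL over the same table, e.g. for a quick quality check.
spark.sql(
    "SELECT order_date, COUNT(*) AS n FROM silver.sales_orders GROUP BY order_date"
).show()
```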
Posted 2 weeks ago
7.0 - 12.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Azure Data Engineer - ADF, ADB, PySpark, SQL
Interested candidates, please share your resume with the details below to juisagars@hexaware.com:
Total Experience:
Relevant Experience:
Current company:
Current CTC:
Expected CTC:
Notice Period:
Current location:
Preferred location:
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderābād
On-site
Job Description: Senior/Azure Data Engineer
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
At least 5 years of relevant hands-on development experience in an Azure Data Engineering role
Proficient in Azure technologies such as ADB, ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog
Hands-on in Python, PySpark or Spark SQL
Hands-on in Azure Analytics and DevOps
Taking part in Proof of Concepts (POCs) and pilot solution preparation
Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows
Experience in business process mapping of data and analytics solutions
At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
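As an illustration of the Delta Tables and Unity Catalog items above, here is a minimal sketch using Unity Catalog's three-level namespace on Databricks; the catalog, schema, table, and group names are hypothetical.

```python
# Minimal Unity Catalog / Delta table sketch; object and group names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.gold")

# Delta table addressed with the catalog.schema.table namespace Unity Catalog uses.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.gold.customer_metrics (
        customer_id STRING,
        total_spend DOUBLE,
        last_order_date DATE
    ) USING DELTA
""")

# Unity Catalog governs access with SQL GRANTs rather than workspace ACLs.
spark.sql("GRANT SELECT ON TABLE analytics.gold.customer_metrics TO `data-analysts`")
```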
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru
On-site
Job Description: Senior Data Engineer (Azure, Snowflake, ADF)
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
Key Responsibilities:
Data Integration & Orchestration: Integrate with Snowflake for scalable data storage and retrieval. Use Azure Data Factory (ADF) and Function Apps for orchestrating and transforming data pipelines (see the sketch following this listing).
Streaming & Messaging:
5+ years of experience in ML/AI/DevOps engineering, including Edge deployment. Strong proficiency in OpenShift, Azure ML, and Terraform. Hands-on experience with Kafka, Snowflake, and Function Apps. Proven experience with CI/CD pipelines, preferably Azure DevOps and Argo. Good understanding of monitoring tools (Prometheus, Grafana, AppInsights). Experience in secure deployments and managing private endpoints in Azure.
At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
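A minimal sketch of the Snowflake integration described above: a Python function of the kind that an ADF pipeline or Azure Function App could invoke to load newly staged files with COPY INTO. It uses the snowflake-connector-python package; the stage, table, warehouse, database, and environment variable names are placeholders.

```python
# Minimal sketch: load staged Parquet files into Snowflake via COPY INTO.
# All object names and environment variables are placeholders.
import os
import snowflake.connector

def load_staged_files() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",      # hypothetical
        database="ANALYTICS",     # hypothetical
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # @AZURE_STAGE is assumed to be an external stage pointing at ADLS/Blob storage.
        cur.execute("""
            COPY INTO RAW.EVENTS
            FROM @AZURE_STAGE/events/
            FILE_FORMAT = (TYPE = 'PARQUET')
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
        print(cur.fetchall())     # per-file load results for logging
    finally:
        conn.close()

if __name__ == "__main__":
    load_staged_files()
```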
Posted 2 weeks ago
7.0 years
3 - 10 Lacs
Bengaluru
On-site
Your Job You’re not the person who will settle for just any role. Neither are we. Because we’re out to create Better Care for a Better World, and that takes a certain kind of person and teams who care about making a difference. Here, you’ll bring your professional expertise, talent, and drive to building and managing our portfolio of iconic, ground-breaking brands. In your role, you’ll help us deliver better care for billions of people around the world. It starts with YOU. About Us Huggies®. Kleenex®. Cottonelle®. Scott®. Kotex®. Poise®. Depend®. Kimberly-Clark Professional®. You already know our legendary brands—and so does the rest of the world. In fact, millions of people use Kimberly-Clark products every day. We know these amazing Kimberly-Clark products wouldn’t exist without talented professionals, like you. At Kimberly-Clark, you’ll be part of the best team committed to driving innovation, growth and impact. We’re founded on 150 years of market leadership, and we’re always looking for new and better ways to perform – so there’s your open door of opportunity. It’s all here for you at Kimberly-Clark; you just need to log on! Led by Purpose. Driven by You. About You You’re driven to perform at the highest level possible, and you appreciate a performance culture fueled by authentic caring. You want to be part of a company actively dedicated to sustainability, inclusion, wellbeing, and career development. You love what you do, especially when the work you do makes a difference. At Kimberly-Clark, we’re constantly exploring new ideas on how, when, and where we can best achieve results. When you join our team, you’ll experience Flex That Works: flexible (hybrid) work arrangements that empower you to have purposeful time in the office and partner with your leader to make flexibility work for both you and the business. Our Data Engineers play a crucial role in designing and operationalizing transformational enterprise data solutions on Cloud Platforms, integrating Azure services, Snowflake technology, and other third-party data technologies. Cloud Data Engineers will work closely with a multidisciplinary agile team to build high-quality data pipelines that drive analytic solutions. These solutions will generate insights from our connected data, enabling Kimberly-Clark to advance its data-driven decision-making capabilities. The ideal candidate will have a deep understanding of data architecture, data engineering, data warehousing, data analysis, reporting, and data science techniques and workflows. They should be skilled in creating data products that support analytic solutions and possess proficiency in working with APIs and understanding data structures to serve them. Experience in using ADF (Azure Data Factory) for orchestrating and automating data movement and transformation. Additionally, expertise in data visualization tools, specifically PowerBI, is required. The candidate should have strong problem-solving skills, be able to work as part of a technical, cross-functional analytics team, and be an agile learner with a passion for solving complex data problems and delivering insights. If you are an agile learner, possess strong problem-solving skills, can work as part of a technical, cross-functional analytics team, and want to solve complex data problems while delivering insights that help enable our analytics strategy, we would like to hear from you. 
This role is perfect for a developer passionate about leveraging cutting-edge technologies to create impactful digital products that connect with and serve our clients effectively. Kimberly-Clark has an amazing opportunity to continue leading the market, and DTS is poised to deliver compelling and robust digital capabilities, products, and solutions to support it. This role will have substantial influence in this endeavor. If you are excited to make a difference applying cutting-edge technologies to solve real business challenges and add value to a global, market-leading organization, please come join us!

Scope/Categories: Role will report to the Data & Analytics Engineer Manager and Product Owner.

Key Responsibilities:
Design and operationalize enterprise data solutions on Cloud Platforms: Develop and implement scalable and secure data solutions on cloud platforms, ensuring they meet enterprise standards and requirements. This includes designing data architecture, selecting appropriate cloud services, and optimizing performance for data processing and storage.
Integrate Azure services, Snowflake technology, and other third-party data technologies: Seamlessly integrate various data technologies, including Azure services, Snowflake, and other third-party tools, to create a cohesive data ecosystem. This involves configuring data connectors, ensuring data flow consistency, and managing dependencies between different systems.
Build and maintain high-quality data pipelines for analytic solutions: Develop robust data pipelines that automate the extraction, transformation, and loading (ETL) of data from various sources into a centralized data warehouse or lake. Ensure these pipelines are efficient, reliable, and capable of handling large volumes of data.
Collaborate with a multidisciplinary agile team to generate insights from connected data: Work closely with data scientists, analysts, and other team members in an agile environment to translate business requirements into technical solutions. Participate in sprint planning, stand-ups, and retrospectives to ensure timely delivery of data products.
Manage and create data inventories for analytics and APIs to be consumed: Develop and maintain comprehensive data inventories that catalog available data assets and their metadata. Ensure these inventories are accessible and usable by various stakeholders, including through APIs that facilitate data consumption.
Design data integrations with internal and external products: Architect and implement data integration solutions that enable seamless data exchange between internal systems and external partners or products. This includes ensuring data integrity, security, and compliance with relevant standards.
Build data visualizations to support analytic insights: Create intuitive and insightful data visualizations using tools like PowerBI, incorporating semantic layers to provide a unified view of data and help stakeholders understand complex data sets and derive actionable insights.

Required Skills and Experience:
Proficiency with Snowflake Ecosystem: Demonstrated ability to use Snowflake for data warehousing, including data ingestion, transformation, and querying. Proficiency in using Snowflake's features for scalable data processing, including the use of Snowpipe for continuous data ingestion and Snowflake's SQL capabilities for data transformation. Ability to optimize Snowflake performance through clustering, partitioning, and other best practices.
Azure Data Factory (ADF): Experience in using ADF for orchestrating and automating data movement and transformation within the Azure ecosystem.
Proficiency in programming languages such as SQL, NoSQL, Python, Java, R, and Scala: Strong coding skills in multiple programming languages used for data manipulation, analysis, and pipeline development.
Experience with ETL (extract, transform, and load) systems and API integrations: Expertise in building and maintaining ETL processes to consolidate data from various sources into centralized repositories, and integrating APIs for seamless data exchange.
Understanding of data architecture, data engineering, data warehousing, data analysis, reporting, and data science techniques and workflows: You should have a comprehensive knowledge of designing and implementing data systems that support various analytic and operational use cases, including data storage, processing, and retrieval.
Basic understanding of machine learning concepts to support data scientists on the team: Familiarity with key machine learning principles and techniques to better collaborate with data scientists and support their analytical models.
Strong problem-solving skills and ability to work as part of a technical, cross-functional analytics team: Excellent analytical and troubleshooting abilities, with the capability to collaborate effectively with team members from various technical and business domains.
Skilled in creating data products that support analytic solutions: Proficiency in developing data products that enable stakeholders to derive meaningful insights and make data-driven decisions. This involves creating datasets, data models, and data services tailored to specific business needs.
Experience in working with APIs and understanding data structures to serve them: Experience in designing, developing, and consuming APIs for data access and integration. This includes understanding various data structures and formats used in API communication.
Knowledge of managing sensitive data, ensuring data privacy and security: Expertise in handling sensitive data with strict adherence to data privacy regulations and security best practices to protect against unauthorized access and breaches.
Agile learner with a passion for solving complex data problems and delivering insights: A proactive and continuous learner with enthusiasm for addressing challenging data issues and providing valuable insights through innovative solutions.
Experience with CPG Companies and POS Data: Experience in analyzing and interpreting POS data to provide actionable insights for CPG companies, enhancing their understanding of consumer behavior and optimizing sales strategies.

Knowledge and Experience
Bachelor’s degree in management information systems/technology, Computer Science, Engineering, or related discipline. MBA or equivalent is preferred.
7+ years of experience in designing large-scale data solutions, performing design assessments, crafting design options and analysis, finalizing preferred solution choice working with IT and Business stakeholders.
5+ years of experience tailoring, configuring, and crafting solutions within the Snowflake environment, including a profound grasp of Snowflake's data warehousing capabilities, data architecture, SQL optimization for Snowflake, and leveraging Snowflake's unique features such as Snowpipe, Streams, and Tasks for real-time data processing and analytics (a minimal sketch of this pattern follows this listing).
A strong foundation in data migration strategies, performance tuning, and securing data within the Snowflake ecosystem is essential.
3+ years demonstrated expertise in architecting solutions within the Snowflake ecosystem, adhering to best practices in data architecture and design patterns.
7+ years of data engineering or design experience, designing, developing, and deploying scalable enterprise data analytics solutions from source system through ingestion and reporting.
Expertise in data modeling principles/methods including Conceptual, Logical & Physical Data Models for data warehouses, data lakes and/or database management systems.
5+ years of hands-on experience designing, building, and operationalizing data solutions and applications using cloud data and analytics services in combination with 3rd parties.
7+ years of hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, NoSQL data platform technologies, and ETL and data ingestion protocols).
7+ years of experience with database development and scripting.

Professional Skills:
Strong communication and interpersonal skills.
Strong analytical and problem-solving skills and passion for product development.
Strong understanding of Agile methodologies and open to working in agile environments with multiple stakeholders.
Professional attitude and service orientation; team player.
Ability to translate business needs into potential analytics solutions.
Strong work ethic, ability to work at an abstract level and gain consensus.
Ability to build a sense of trust and rapport to create a comfortable and effective workplace.
Self-starter who can see the big picture, prioritize work to make the largest impact on the business and customer's vision and requirements.
Fluency in English.

To Be Considered: Click the Apply button and complete the online application process. A member of our recruiting team will review your application and follow up if you seem like a great fit for this role. In the meantime, check out the careers website. You’ll want to review this and come prepared with relevant questions when you pass GO and begin interviews.

For Kimberly-Clark to grow and prosper, we must be an inclusive organization that applies the diverse experiences and passions of its team members to brands that make life better for people all around the world. We actively seek to build a workforce that reflects the experiences of our consumers. When you bring your original thinking to Kimberly-Clark, you fuel the continued success of our enterprise. We are a committed equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation, gender identity, age, pregnancy, genetic information, citizenship status, or any other characteristic protected by law. The statements above are intended to describe the general nature and level of work performed by employees assigned to this classification. Statements are not intended to be construed as an exhaustive list of all duties, responsibilities and skills required for this position. Additional information about the compensation and benefits for this role is available upon request. You may contact kcchrprod@service-now.com for assistance. You must include the six-digit Job # with your request. This role is available for local candidates already authorized to work in the role’s country only.
Kimberly-Clark will not provide relocation support for this role.
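As referenced above, a minimal sketch of the Snowpipe / Streams / Tasks pattern: Snowpipe continuously ingests staged files, a stream captures changes on the landing table, and a scheduled task merges them into a curated table. All object names (stage, tables, warehouse, task, columns) are hypothetical, and this is not Kimberly-Clark's actual implementation.

```python
# Minimal Snowpipe + Stream + Task sketch executed via the Snowflake Python connector.
# Object names are hypothetical; AUTO_INGEST additionally requires a cloud notification setup.
import os
import snowflake.connector

DDL = [
    # Continuous ingestion of files landed in an external stage.
    """CREATE PIPE IF NOT EXISTS raw.pos_pipe AUTO_INGEST = TRUE AS
       COPY INTO raw.pos_sales FROM @raw.pos_stage
       FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)""",
    # Change capture on the landing table.
    "CREATE STREAM IF NOT EXISTS raw.pos_sales_stream ON TABLE raw.pos_sales",
    # Scheduled task that applies new rows to the curated layer when the stream has data.
    """CREATE TASK IF NOT EXISTS raw.apply_pos_changes
       WAREHOUSE = transform_wh
       SCHEDULE = '15 MINUTE'
       WHEN SYSTEM$STREAM_HAS_DATA('raw.pos_sales_stream')
       AS INSERT INTO curated.pos_sales (store_id, sku, qty, sale_ts)
          SELECT store_id, sku, qty, sale_ts
          FROM raw.pos_sales_stream
          WHERE METADATA$ACTION = 'INSERT'""",
    "ALTER TASK raw.apply_pos_changes RESUME",
]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",   # hypothetical
)
try:
    cur = conn.cursor()
    for statement in DDL:
        cur.execute(statement)
finally:
    conn.close()
```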
Posted 2 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference: nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

Key Responsibilities:
Data Extraction: Extract data from diverse sources while ensuring accuracy and completeness.
Data Transformation: Perform data cleaning, validation, and apply business rules to transform raw data into a structured format for analysis.
Data Loading: Load transformed data into target systems and design efficient data models and workflows.
ETL Process Management: Design, develop, implement, and maintain ETL processes to integrate data efficiently into data warehouses or analytics platforms.
Performance Optimization: Optimize and tune ETL processes for performance improvements, monitor jobs, and troubleshoot production issues.
Data Quality and Governance: Ensure the quality, integrity, and compliance of data according to organizational and regulatory standards.
Collaboration & Documentation: Work with business stakeholders to understand data requirements, document ETL workflows, and ensure proper communication.

Tool-Specific Responsibilities:
Leverage DataStage for designing and building complex ETL jobs.
Use Azure Data Factory for scalable cloud-based integration and orchestration.
Develop and maintain solutions for Snowflake data warehousing.
Utilize SQL Server to manage data extraction and transformation processes.
Implement DataStage Sequencers, Parallel Jobs, Aggregators, Joins, Merges, Lookups, etc.
Provide support in resolving integration-related production issues following the change management process.

Key Focus: Ensuring efficient, accurate, and secure data flow for the organization’s data warehousing and analytics needs (a minimal extract-transform-load sketch follows this listing).

Must-Have Skills:
Education: Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
ETL Tools: 7+ years of hands-on experience in DataStage (V8.5 or higher). Expertise in DataStage V11.3 and 8.7 versions. Strong experience in DataStage design and parallel jobs (e.g., Aggregator, Merge, Lookup, Source dataset, Change Capture). Advanced knowledge of UNIX and shell scripting.
Azure Data Factory (ADF): 3+ years of experience in designing, developing, and managing Azure Data Factory pipelines. Proficient in using ADF connectors for integration with different data sources and destinations. Experience in ADF Data Flows and pipeline orchestration.
Database & SQL: 7+ years of experience in Microsoft SQL Server, including experience in writing and optimizing SQL queries. 3+ years of experience in DB2 UDB Administration and Support. Experience in creating and managing SQL Server Agent jobs and SSIS packages. Hands-on experience in data warehousing solutions and data modeling with SQL Server.
Data Quality & Governance: Ability to ensure high data integrity and governance throughout ETL processes.

Good to Have Skills:
Experience with Snowflake data warehouse solutions.
Familiarity with cloud-based ETL tools and technologies.
Knowledge of Kafka (basic understanding) for stream processing and integration.
Experience with Report Solution/Design and building automated reports using SQL Server and other reporting tools.
Experience with implementing Data Security and Compliance processes in ETL.

Role Requirements:
Problem-Solving Skills: Ability to troubleshoot issues related to ETL processes and data integration.
Collaboration: Ability to work effectively in a cross-functional team with business analysts, data engineers, and other stakeholders. Attention to Detail: Strong focus on ensuring the accuracy and consistency of data throughout the ETL pipeline. Communication: Excellent communication skills for documentation and reporting purposes.
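As referenced in the Key Focus line above, here is a minimal extract-transform-load sketch using pyodbc against SQL Server: it pulls rows from a source table, applies a simple business rule, and bulk-inserts the result into a target table. Connection strings, tables, and columns are placeholders.

```python
# Minimal ETL sketch with pyodbc; connection strings and object names are placeholders.
import pyodbc

SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src-host;DATABASE=staging;Trusted_Connection=yes"
TGT = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dw-host;DATABASE=warehouse;Trusted_Connection=yes"

with pyodbc.connect(SRC) as src, pyodbc.connect(TGT) as tgt:
    read_cur = src.cursor()
    write_cur = tgt.cursor()
    write_cur.fast_executemany = True  # speeds up the batched INSERTs

    read_cur.execute("SELECT claim_id, member_id, amount, status FROM dbo.claims_raw")
    while True:
        batch = read_cur.fetchmany(5_000)
        if not batch:
            break
        # Business rule: skip draft records and normalize status casing.
        rows = [
            (r.claim_id, r.member_id, float(r.amount), r.status.upper())
            for r in batch
            if r.status and r.status.lower() != "draft"
        ]
        write_cur.executemany(
            "INSERT INTO dbo.claims_clean (claim_id, member_id, amount, status) "
            "VALUES (?, ?, ?, ?)",
            rows,
        )
    tgt.commit()
```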
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us: Where elite tech talent meets world-class opportunities! At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges. We are building a community of top-tier experts and we're opening the doors to an exclusive group of exceptional AI & ML Professionals ready to solve real-world problems and shape the future of intelligent systems. Structured Onboarding Process We ensure every member is aligned and empowered: Screening - We review your application and experience in Data & AI, ML engineering, and solution delivery Technical Assessment - 2-step technical assessment process that includes an interactive problem-solving test, and a verbal interview about your skills and experience Matching you to Opportunity - We explore how your skills align with ongoing projects and innovation tracks Who We're Looking For We are looking for a skilled and experienced Data Engineer with deep expertise in the Databricks ecosystem to join our data engineering team. You will be responsible for building, optimizing, and maintaining scalable data pipelines on Databricks, leveraging Delta Lake, PySpark, and cloud-native services (AWS, Azure, or GCP). You will collaborate with data scientists, analysts, and business stakeholders to ensure clean, high-quality, and governed data is available for analytics and machine learning use cases. Requirements 6+ years of experience as a Data Engineer, with at least 4 years hands-on with Databricks in production environments Proficient in PySpark and SQL for large-scale data processing Deep understanding of Delta Lake features: ACID transactions, schema enforcement, time travel, and vacuuming Experience working with cloud platforms: AWS (Glue, S3), Azure (Data Lake, ADF), or GCP (BigQuery, GCS) Hands-on experience with Databricks Auto Loader, Structured Streaming, and job scheduling Familiarity with Unity Catalog for multi-workspace governance and fine-grained data access Experience integrating with orchestration tools (Airflow, ADF) and using infrastructure-as-code for deployment Comfortable with version control and automation using Git, Databricks Repos, dbx, or Terraform Experience with performance tuning, Z-Ordering, caching strategies, and partitioning best practices Benefits At Xenon7, we're not just building AI systems—we're building a community of talent with the mindset to lead, collaborate, and innovate together. Ecosystem of Opportunity: You'll be part of a growing network where client engagements, thought leadership, research collaborations, and mentorship paths are interconnected. Whether you're building solutions or nurturing the next generation of talent, this is a place to scale your influence Collaborative Environment: Our culture thrives on openness, continuous learning, and engineering excellence. 
You'll work alongside seasoned practitioners who value smart execution and shared growth. Flexible & Impact-Driven Work: Whether you're contributing from a client project, innovation sprint, or open-source initiative, we focus on outcomes—not hours. Autonomy, ownership, and curiosity are encouraged here. Talent-Led Innovation: We believe communities are strongest when built around real practitioners. Our Innovation Community isn't just a knowledge-sharing forum—it's a launchpad for members to lead new projects, co-develop tools, and shape the direction of AI itself
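The Auto Loader and Structured Streaming requirement above can be illustrated with a short Databricks sketch: Auto Loader incrementally picks up files landing in cloud storage and streams them into a Delta table with schema tracking and checkpointing. The storage paths and target table (bronze.events) are hypothetical, and the target schema is assumed to exist.

```python
# Minimal Auto Loader + Structured Streaming sketch for Databricks; paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

landing = "abfss://landing@examplelake.dfs.core.windows.net/events/"        # hypothetical
schema_loc = "abfss://meta@examplelake.dfs.core.windows.net/schemas/events/"
checkpoint = "abfss://meta@examplelake.dfs.core.windows.net/checkpoints/events/"

stream = (
    spark.readStream.format("cloudFiles")             # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", schema_loc)  # infer and track schema over time
    .load(landing)
)

(
    stream.writeStream.format("delta")
    .option("checkpointLocation", checkpoint)
    .trigger(availableNow=True)                       # process the backlog, then stop
    .toTable("bronze.events")                         # assumes the bronze schema exists
)
```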
Posted 2 weeks ago