8.0 - 10.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Required Skills
• 8+ years of experience in data architecture and design.
• Strong hands-on experience with Azure Data Services, Databricks, and ADF.
• Proven experience in insurance data domains and product-oriented data design.
Posted 2 days ago
5.0 - 7.0 years
5 - 5 Lacs
Mumbai, Chennai, Gurugram
Work from Office
Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Outcomes:
- Interpret the application/feature/component design and develop it in accordance with specifications.
- Code, debug, test, document, and communicate product/component/feature development stages.
- Validate results with user representatives; integrate and commission the overall solution.
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
- Optimize efficiency, cost, and quality.
- Influence and improve customer satisfaction.
- Set FAST goals for self/team; provide feedback on FAST goals of team members.
Measures of Outcomes:
- Adherence to engineering process and standards (coding standards)
- Adherence to project schedule/timelines
- Number of technical issues uncovered during the execution of the project
- Number of defects in the code
- Number of defects post delivery
- Number of non-compliance issues
- On-time completion of mandatory compliance trainings
Outputs Expected:
- Code: Code as per design; follow coding standards, templates, and checklists; review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development; create/review deliverable documents, design documentation and requirements, test cases/results.
- Configure: Define and govern the configuration management plan; ensure compliance from the team.
- Test: Review and create unit test cases, scenarios, and execution; review the test plan created by the testing team; provide clarifications to the testing team.
- Domain relevance: Advise software developers on the design and development of features and components with a deep understanding of the business problem being addressed for the client; learn more about the customer domain, identifying opportunities to provide valuable additions to customers; complete relevant domain certifications.
- Manage Project: Manage delivery of modules and/or manage user stories.
- Manage Defects: Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality.
- Estimate: Create and provide input for effort estimation for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities; review the reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
- Interface with Customer: Clarify requirements and provide guidance to the development team; present design options to customers; conduct product demos.
- Manage Team: Set FAST goals and provide feedback; understand aspirations of team members and provide guidance, opportunities, etc.; ensure the team is engaged in the project.
- Certifications: Take relevant domain/technology certifications.
Skill Examples:
- Explain and communicate the design/development to the customer.
- Perform and evaluate test results against product specifications.
- Break down complex problems into logical components.
- Develop user interfaces and business software components.
- Use data models.
- Estimate time and effort required for developing/debugging features/components.
- Perform and evaluate tests in the customer or target environment.
- Make quick decisions on technical/project-related challenges.
- Manage a team, mentor, and handle people-related issues in the team.
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams, designers, and other parallel practices.
- Set goals for self and team; provide feedback to team members.
- Create and articulate impactful technical presentations.
- Follow a high level of business etiquette in emails and other business communication.
- Drive conference calls with customers, addressing customer questions.
- Proactively ask for and offer help.
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
- Build confidence with customers by meeting deliverables on time with quality.
- Make appropriate utilization of software and hardware.
- Strong analytical and problem-solving abilities.
Knowledge Examples:
- Appropriate software programs/modules
- Functional and technical design
- Programming languages: proficiency in multiple skill clusters
- DBMS, operating systems, and software platforms
- Software Development Life Cycle
- Agile (Scrum or Kanban) methods
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of the customer domain and deep understanding of the subdomain where the problem is solved
Additional Comments:
Accountabilities
- Work in iterative processes to map data into common formats, perform advanced data analysis, validate findings or test hypotheses, and communicate results and methodology.
- Provide recommendations on how to utilize our data to optimize our search, increase data accuracy, and help us better understand our existing data.
- Communicate technical information successfully with technical and non-technical audiences such as third-party vendors, external customer technical departments, various levels of management, and other relevant parties.
- Collaborate effectively with all team members and attend regular team meetings.
Required Qualifications
- Bachelor's or master's degree in computer science, engineering, mathematics, statistics, or an equivalent technical discipline.
- 6+ years of experience working with data mapping, data analysis, and numerous large data sets/data warehouses.
- Strong application development experience using Java and C++.
- Strong experience with Azure Databricks, ADLS Gen2, Azure Data Explorer, and Event Hub technologies.
- Experience with application containerization and deployment processes (Docker, Helm charts, GitHub, CI/CD pipelines).
- Experience working with Cosmos DB is preferred.
- Ability to assemble, analyze, and evaluate big data and make appropriate and well-reasoned recommendations to stakeholders.
- Good analytical and problem-solving skills; good understanding of different data structures, algorithms, and their usage in solving business problems.
- Strong communication (verbal and written) and customer service skills. Strong interpersonal, communication, and presentation skills applicable to a wide audience, including senior and executive management, customers, etc.
- Strong skills in setting, communicating, implementing, and achieving business objectives and goals.
- Strong organization/project planning, time management, and change management skills across multiple functional groups and departments, and strong delegation skills involving prioritizing and reprioritizing projects and managing projects of various size and complexity.
Required Skills: Java, Spring Boot, Azure cloud, Docker, Helm charts (for Kubernetes deployment).
Posted 2 days ago
6.0 - 8.0 years
15 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities
We are seeking a hands-on Data Engineer to develop, optimize, and maintain automated data pipelines supporting data governance and analytics initiatives. This role will focus on building production-ready workflows for ingestion, transformation, quality checks, lineage capture, access auditing, cost usage analysis, retention tracking, and metadata integration, primarily using Azure Databricks, Azure Data Lake, and Microsoft Purview.
Experience: 4+ years in data engineering, with strong Azure and Databricks experience
Key Responsibilities
- Pipeline Development – Design, build, and deploy robust ETL/ELT pipelines in Databricks (PySpark, SQL, Delta Lake) to ingest, transform, and curate governance and operational metadata from multiple sources landed in Databricks.
- Granular Data Quality Capture – Implement profiling logic to capture issue-level metadata (source table, column, timestamp, severity, rule type) to support drill-down from dashboards into specific records and enable targeted remediation (see the sketch below).
- Governance Metrics Automation – Develop data pipelines to generate metrics for dashboards covering data quality, lineage, job monitoring, access & permissions, query cost, usage & consumption, retention & lifecycle, policy enforcement, sensitive data mapping, and governance KPIs.
- Microsoft Purview Integration – Automate asset onboarding, metadata enrichment, classification tagging, and lineage extraction for integration into governance reporting.
- Data Retention & Policy Enforcement – Implement logic for retention tracking and policy compliance monitoring (masking, RLS, exceptions).
- Job & Query Monitoring – Build pipelines to track job performance, SLA adherence, and query costs for cost and performance optimization.
- Metadata Storage & Optimization – Maintain curated Delta tables for governance metrics, structured for efficient dashboard consumption.
- Testing & Troubleshooting – Monitor pipeline execution, optimize performance, and resolve issues quickly.
- Collaboration – Work closely with the lead engineer, QA, and reporting teams to validate metrics and resolve data quality issues.
- Security & Compliance – Ensure all pipelines meet organizational governance, privacy, and security standards.
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field
- 4+ years of hands-on data engineering experience with Azure Databricks and Azure Data Lake
- Proficiency in PySpark, SQL, and ETL/ELT pipeline design
- Demonstrated experience building granular data quality checks and integrating governance logic into pipelines
- Working knowledge of Microsoft Purview for metadata management, lineage capture, and classification
- Experience with Azure Data Factory or equivalent orchestration tools
- Understanding of data modeling, metadata structures, and data cataloging concepts
- Strong debugging, performance tuning, and problem-solving skills
- Ability to document pipeline logic and collaborate with cross-functional teams
Preferred Qualifications
- Microsoft certification in Azure Data Engineering
- Experience in governance-heavy or regulated environments (e.g., finance, healthcare, hospitality)
- Exposure to Power BI or other BI tools as a data source consumer
- Familiarity with DevOps/CI-CD for data pipelines in Azure
- Experience integrating both cloud and on-premises data sources into Azure
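A rough, non-authoritative sketch of the granular data-quality capture described above (all table names, columns, and rules are hypothetical, not from the posting): a PySpark job on Databricks might evaluate rule predicates against a source table and append issue-level metadata to a curated Delta table that dashboards can drill into.

```python
# Hypothetical sketch: issue-level data-quality capture on Databricks (PySpark).
# Table names and rule definitions are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

SOURCE_TABLE = "bronze.policy_transactions"  # hypothetical source
ISSUES_TABLE = "governance.dq_issues"        # hypothetical curated Delta table

# Each rule: (column, rule_type, predicate that flags BAD rows, severity).
rules = [
    ("policy_id", "not_null",     F.col("policy_id").isNull(), "high"),
    ("premium",   "non_negative", F.col("premium") < 0,        "medium"),
]

df = spark.table(SOURCE_TABLE)

# Build one issue record per failing row, per rule, with drill-down metadata.
issue_frames = [
    df.filter(bad_predicate).select(
        F.lit(SOURCE_TABLE).alias("source_table"),
        F.lit(column).alias("column_name"),
        F.lit(rule_type).alias("rule_type"),
        F.lit(severity).alias("severity"),
        F.current_timestamp().alias("captured_at"),
    )
    for column, rule_type, bad_predicate, severity in rules
]

all_issues = issue_frames[0]
for frame in issue_frames[1:]:
    all_issues = all_issues.unionByName(frame)

# Append to the governance table so dashboards can drill from aggregate
# quality scores down to individual rule failures.
all_issues.write.format("delta").mode("append").saveAsTable(ISSUES_TABLE)
```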
Posted 2 days ago
5.0 - 10.0 years
12 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Summary:
We are seeking an experienced and motivated Microsoft Fabric Architect with 5-6 years of experience in designing and building data solutions using Microsoft Fabric, Power BI, SQL, PySpark, and Azure Databricks. This role requires strong hands-on skills in cloud-based data architecture and modern analytics tools, with an emphasis on data modeling, performance optimization, and advanced data engineering.
Key Responsibilities:
- Architect and implement modern data platforms using Microsoft Fabric, integrating Power BI, SQL, and Azure Databricks.
- Build scalable and optimized data pipelines and ETL/ELT workflows using SQL and PySpark.
- Develop robust data models to support business intelligence and analytics use cases.
- Design and implement data visualization solutions using Power BI.
- Collaborate with data engineers, analysts, and business teams to define and deliver end-to-end data solutions.
- Ensure data governance, quality, and security standards are met across the platform.
- Monitor, troubleshoot, and continuously improve the performance of data solutions.
Required Skills and Experience:
- 5 to 6 years of experience in data architecture or data engineering roles.
- At least 1 year of hands-on experience with Microsoft Fabric and deep knowledge of Power BI.
- Strong proficiency in SQL for data modeling, transformation, and performance tuning.
- Experience developing data pipelines and transformations using PySpark in Azure Databricks.
- Familiarity with the Microsoft Azure ecosystem (e.g., Azure Storage, Azure SQL).
- Good understanding of data governance, access control, and compliance in cloud environments.
- Strong problem-solving skills and the ability to work independently or as part of a team.
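Purely as a hedged sketch of the SQL-based modeling and transformation work this posting describes (table and column names are hypothetical), a dimensional rollup on Databricks might be expressed through Spark SQL:

```python
# Hypothetical sketch: shaping a fact table for Power BI consumption using
# Spark SQL on Databricks. All table and column names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Join a transactional source to a date dimension and pre-aggregate,
# so the BI layer reads a compact, model-friendly table.
spark.sql("""
    CREATE OR REPLACE TABLE gold.sales_fact AS
    SELECT
        s.product_id,
        d.date_key,
        SUM(s.amount) AS total_amount,
        COUNT(*)      AS txn_count
    FROM silver.sales AS s
    JOIN gold.dim_date AS d
      ON CAST(s.sold_at AS DATE) = d.calendar_date
    GROUP BY s.product_id, d.date_key
""")
```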
Posted 2 days ago
2.0 - 5.0 years
5 - 9 Lacs
Gurugram
Work from Office
Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities:
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional Requirements:
Primary skills: Technology->Machine Learning->Python
Preferred Skills: Technology->Machine Learning->Python
Posted 2 days ago
7.0 - 12.0 years
15 - 18 Lacs
Pune
Work from Office
Greetings from NAM Info!!! Please go through the job description. If you are interested in this opportunity, please reply with the following information to praveen@nam-it.com:
- Full Name (as in Aadhaar):
- Current Location:
- Expected CTC:
- Present CTC:
- Notice Period (Last Working Day, if any):
- PAN Number:
Job Title: Azure Data Engineer
Company: Thermax Limited
Location: Pune
Job Type: Full-Time Employee of Thermax
Experience Level: Senior (7+ years)
Interview Process: 2-3 rounds, virtual
Job Summary:
Thermax is seeking a Senior Azure Databricks Engineer to lead the design, development, and deployment of scalable data solutions on the Azure cloud platform. The ideal candidate will have deep experience in Azure Databricks, Spark, and modern data engineering practices, and will play a pivotal role in driving advanced analytics, predictive maintenance, and energy efficiency use cases across our industrial operations.
Key Responsibilities:
- Design and implement robust data pipelines in Azure Databricks using PySpark, SQL, and Delta Lake (see the streaming sketch below).
- Build and maintain scalable ETL/ELT workflows for ingesting data from various industrial sources (SCADA, PLCs, SAP, IoT).
- Collaborate with data scientists, business analysts, and domain experts to support AI/ML model development and deployment.
- Work with Azure Data Lake Storage (ADLS Gen2), Azure Synapse, Azure Data Factory, and Azure Event Hub for data integration and transformation.
- Optimize Spark jobs for performance, reliability, and cost-efficiency.
- Implement CI/CD pipelines using DevOps tools (e.g., Azure DevOps, Git).
- Ensure data governance, lineage, and security compliance in collaboration with the IT and security teams.
- Support and mentor junior engineers in cloud data engineering best practices.
Required Qualifications:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in Data Engineering, including 3+ years in Azure Databricks/Spark.
- Strong proficiency in PySpark, Spark SQL, and Delta Lake architecture.
- Experience with Azure services: Data Factory, Data Lake, Synapse, Event Hub, and Key Vault.
- Solid understanding of distributed computing, big data processing, and real-time streaming.
- Familiarity with industrial data protocols (e.g., OPC UA, MQTT) is a strong plus.
- Strong knowledge of data modeling, schema design, and data quality principles.
Preferred Qualifications:
- Experience in energy, manufacturing, or utilities sectors.
- Exposure to IoT analytics, predictive maintenance, or digital twin architectures.
- Certifications such as Azure Data Engineer Associate (DP-203) or Databricks Certified Data Engineer.
- Experience with MLflow, Databricks Feature Store, or integrating with Power BI.
- Working knowledge of containerization (Docker, Kubernetes) is a plus.
Soft Skills:
- Strong problem-solving and debugging skills.
- Excellent communication and stakeholder engagement.
- Ability to lead discussions across technical and non-technical teams.
- Self-starter, detail-oriented, and collaborative mindset.
Regards,
Praveen
Staffing Executive
NAM Info Pvt Ltd
Email: praveen@nam-it.com
Website: www.nam-it.com
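As a hedged illustration of the pipeline work this posting describes (assuming an Event Hubs source reached over its Kafka-compatible endpoint and a Delta Lake sink; namespace, topic, table, and path names are all hypothetical), a Databricks structured streaming job might look like this:

```python
# Hypothetical sketch: streaming IoT telemetry from Azure Event Hubs (via its
# Kafka-compatible endpoint) into a Delta table on Databricks.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("temperature_c", DoubleType()),
    StructField("recorded_at", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    # Hypothetical namespace; Event Hubs' Kafka endpoint listens on 9093.
    .option("kafka.bootstrap.servers", "my-namespace.servicebus.windows.net:9093")
    .option("subscribe", "iot-telemetry")  # hypothetical Event Hub name
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    # SASL JAAS config carrying the connection string omitted for brevity.
    .load()
)

# Parse the JSON payload and keep only well-formed records.
telemetry = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("event"))
       .select("event.*")
       .filter(F.col("device_id").isNotNull())
)

# Append to a bronze Delta table, with checkpointing for recovery.
(telemetry.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/iot_telemetry")  # hypothetical
    .toTable("bronze.iot_telemetry"))
```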
Posted 3 days ago
5.0 - 10.0 years
5 - 12 Lacs
Bengaluru
Work from Office
Job Description:
- AWS/Azure/SAP ETL
- Data Modelling
- Data Integration & Ingestion
- Data Manipulation and Processing
- GitHub, GitHub Actions, Azure DevOps
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a part of the data and analytics engineering team at PwC, your focus will be on utilizing advanced technologies and techniques to create robust data solutions for clients. Your role will involve transforming raw data into actionable insights, enabling informed decision-making, and contributing to business growth. Specifically in data engineering at PwC, you will be responsible for designing and constructing data infrastructure and systems that facilitate efficient data processing and analysis. This will include the development and implementation of data pipelines, data integration, and data transformation solutions.
At PwC - AC, we are seeking an Azure Manager specializing in Data & AI, with a strong background in managing end-to-end implementations of Azure Databricks within large-scale Data & AI programs. In this role, you will be involved in architecting, designing, and deploying scalable and secure solutions that meet business requirements, encompassing ETL, data integration, and migration. Collaboration with cross-functional, geographically dispersed teams and clients will be key to understanding strategic needs and translating them into effective technology solutions. Your responsibilities will span technical project scoping, delivery planning, team leadership, and ensuring the timely execution of high-quality solutions. Utilizing big data technologies, you will create scalable, fault-tolerant components, engage stakeholders, overcome obstacles, and stay abreast of emerging technologies to enhance client ROI.
Candidates applying for this role should possess 8-12 years of hands-on experience and meet the following position requirements:
- Proficiency in designing, architecting, and implementing scalable Azure Data Analytics solutions utilizing Azure Databricks.
- Expertise in Azure Databricks, including Spark architecture and optimization.
- Strong grasp of Azure cloud computing and big data technologies.
- Experience in traditional and modern data architecture and processing concepts, encompassing relational databases, data warehousing, big data, NoSQL, and business analytics.
- Proficiency in Azure ADLS, Databricks, Data Flows, HDInsight, and Azure Analysis Services.
- Ability to build stream-processing systems using solutions like Storm or Spark Streaming.
- Practical knowledge of designing and building Near-Real-Time and Batch Data Pipelines, and expertise in SQL and data modeling within an Agile development process.
- Experience in the architecture, design, implementation, and support of complex application architectures.
- Hands-on experience implementing Big Data solutions using the Microsoft Data Platform and Azure Data Services.
- Familiarity with working in a DevOps environment using tools like Chef, Puppet, or Terraform.
- Strong analytical and troubleshooting skills, along with proficiency in quality processes and implementation.
- Excellent communication skills and business/domain knowledge in Financial Services, Healthcare, Consumer Markets, Industrial Products, Telecommunications, Media and Technology, or Deal Advisory.
- Familiarity with Application DevOps tools like Git, CI/CD frameworks, Jenkins, or GitLab.
- Good understanding of Data Modeling and Data Architecture.
Certification in Data Engineering on Microsoft Azure (DP-200/201/203) is required.
Additional Information:
- Travel Requirements: Travel to client locations may be necessary based on project needs.
- Line of Service: Advisory
- Horizontal: Technology Consulting
- Designation: Manager
- Location: Bangalore, India
In addition to the above, the following skills are considered advantageous:
- Cloud expertise in AWS, GCP, Informatica Cloud, Oracle Cloud.
- Knowledge of Cloud DW technologies like Snowflake and Databricks.
- Certifications in Azure Databricks.
- Familiarity with open-source technologies such as Apache Spark, Hadoop, NoSQL, Kafka, and Solr/Elasticsearch.
- Data engineering skills in Java, Python, PySpark, and R programming.
- Data visualization proficiency in Tableau and Qlik.
Education qualifications accepted include BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
- Design, develop, and maintain data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement ETL solutions to integrate data from various sources into Azure Data Lake and Data Warehouse.
- Hands-on experience with SQL, Python, and PySpark for data processing.
- Expertise in building Power BI dashboards and reports; strong DAX and Power Query skills.
- Experience with Power BI Service, Gateways, and embedding reports.
- Develop Power BI datasets, semantic models, and row-level security for data access control.
- Experience with Azure Data Factory, Azure Databricks, and Azure Synapse.
- Strong customer orientation, decision-making, problem-solving, communication, and presentation skills.
- Very good judgment skills and the ability to shape compelling solutions and solve unstructured problems with assumptions.
- Very good collaboration skills and the ability to interact with multicultural and multifunctional teams spread across geographies.
- Strong executive presence and entrepreneurial spirit.
- Superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, or new parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
The role of a Databricks PySpark IICS professional at Infosys involves working as part of the consulting team to address customer issues, develop innovative solutions, and ensure successful deployment to achieve client satisfaction. Your responsibilities will include contributing to proposal development, solution design, product configuration, and conducting pilots and demonstrations. Additionally, you will lead small projects, provide high-quality solutions, and support organizational initiatives. The technical requirements for this role include expertise in Data On Cloud Platform, specifically Azure Data Lake (ADL). You should also possess the ability to develop strategies that drive innovation, growth, and profitability for clients. Familiarity with software configuration management systems, industry trends, and financial processes is essential. Strong problem-solving skills, collaboration abilities, and knowledge of pricing models are key for success in this role. Preferred skills for this position include experience with Azure Analytics Services, specifically Azure Databricks. By leveraging your domain knowledge, client interfacing skills, and project management capabilities, you can help clients navigate their digital transformation journey effectively. If you are passionate about delivering value-added solutions, driving business transformation, and embracing the latest technologies, this opportunity at Infosys is the right fit for you.,
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
As a technical leader at Advisor360, you will play a crucial role in the planning and implementation of mid- to large-scale projects. Your expertise will be instrumental in architecting and implementing cloud-based data solutions using your strong technical skills. You will be responsible for validating requirements, performing business and technical analysis, designing cloud-native applications, and writing optimized PySpark-based data pipelines. Ensuring compliance with coding standards and best practices for AI-assisted code generation will be a part of your responsibilities. You will utilize GenAI tools like Cursor, Claude, and other LLMs to decompose complex requirements and auto-generate UI, API, and database scripts for rapid development. Your proficiency in introducing new software design patterns, AI-driven development workflows, and emerging cloud technologies will be crucial for the team's success. As a subject matter expert within the organization, you will help resolve complex technical issues related to cloud data engineering, distributed computing, and microservices architecture. Mentoring team members and fostering collaborative learning environments, particularly in areas related to GenAI, Azure, PySpark, Databricks, and CI/CD automation, will be a key aspect of your role. Your preferred strengths should include proficiency in GenAI-powered development, strong experience with SQL, relational databases, and Git, as well as expertise in Microsoft Azure cloud services. Familiarity with serverless architectures, containerization, and big data frameworks will be beneficial for this position. To excel in this role, you are expected to have 7+ years of software engineering experience with Python and .NET, a proven ability to analyze and automate requirement-based development using GenAI, and hands-on expertise in building and deploying cloud-based data processing applications. Strong knowledge of SDLC methodologies, proficiency in coding standards, and experience in integrating AI-based automation for improving code quality are also required. While wealth management domain experience is a plus, it is not mandatory. Candidates should be willing to learn and master the domain through on-the-job experience. Join Advisor360 for a rewarding career experience where your contributions are recognized and rewarded. Enjoy competitive base salaries, annual performance-based bonuses, and comprehensive health benefits. We trust our employees to manage their time effectively and offer an unlimited paid time off program to ensure you can perform at your best every day. We are committed to diversity and inclusion, believing that it drives innovation, and we welcome individuals from all backgrounds to bring their authentic selves to work every day.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Senior Associate in the Data, Analytics & Specialist Managed Service Tower at PwC, with 6 to 10 years of experience, you will be part of a team of problem solvers addressing complex business issues from strategy to execution. Your responsibilities at this management level include utilizing feedback and reflection for personal development, being flexible in stretch opportunities, and demonstrating critical thinking skills to solve unstructured problems. You will also be involved in ticket quality reviews, project status reporting, and ensuring adherence to SLAs and incident, change, and problem management processes. Seeking diverse opportunities, communicating effectively, upholding ethical standards, demonstrating leadership capabilities, and collaborating in a team environment are essential aspects of this role. You will also be expected to contribute to cross competency work, COE activities, and manage escalations and risks. As a Senior Azure Cloud Engineer, you are required to have a minimum of 6 years of hands-on experience in building advanced Data warehousing solutions on leading cloud platforms, along with 3-5 years of Operate/Managed Services/Production Support Experience. Your responsibilities will include designing scalable and secure data structures, developing data pipelines for downstream consumption, and implementing ETL processes using tools like Informatica, Talend, SSIS, AWS, Azure, Spark, SQL, and Python. Experience with data analytics tools, data governance solutions, ITIL processes, and strong communication and problem-solving skills are essential for this role. Knowledge of Azure Data Factory, Azure SQL Database, Azure Data Lake, Azure Blob Storage, Azure Databricks, Azure Synapse Analytics, and Apache Spark is also required. Additionally, experience in data validation, cleansing, security, and privacy measures, as well as SQL querying, data governance, and performance tuning are essential. Nice to have qualifications for this role include Azure certification. Managed Services- Data, Analytics & Insights Managed Service at PwC focuses on providing integrated services and solutions to clients, enabling them to optimize operations and accelerate outcomes through technology and human-enabled experiences. The team emphasizes a consultative approach to operations, leveraging industry insights and talent to drive transformational journeys and sustained client outcomes. As a member of the Data, Analytics & Insights Managed Service team, you will be involved in critical offerings, help desk support, enhancement, optimization work, and strategic advisory engagements. Your role will require a mix of technical expertise and relationship management skills to support customer engagements effectively.,
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As an Associate Data Engineer/Analyst at EY, you will leverage your expertise to create, develop, and maintain scalable big data processing pipelines in distributed computing environments. Your role will involve designing and implementing interactive Power BI reports and dashboards tailored to the specific needs of various business units. Collaborating with cross-functional teams, you will gather data requirements and design effective data solutions. Your responsibilities will also include executing data ingestion, processing, and transformation workflows to support analytical and machine learning applications. To excel in this role, you should have over 2 years of experience as a data analyst or data engineer, along with a Bachelor's degree in a relevant field. Proficiency in SQL, a solid understanding of Azure data engineering tools like Azure Data Factory, and familiarity with Python programming are essential. You should also be competent in using Azure Databricks and possess expertise in Power BI and other Power Platform tools. Knowledge of large language models and generative AI solutions will be advantageous. Furthermore, you are expected to stay updated with emerging technologies and best practices in data processing and analytics, integrating them into EY's data engineering methodologies. Your role will require expertise in data modelling, data warehousing principles, data governance best practices, and ETL processes. Strong written and verbal communication skills, including documentation, presentation, and data storytelling, are essential for effective collaboration within the team. Joining EY means contributing to building a better working world, where new value is created for clients, people, society, and the planet while fostering trust in capital markets. With the support of data, AI, and advanced technology, EY teams help clients shape a confident future and address pressing issues of today and tomorrow. Working across assurance, consulting, tax, strategy, and transactions services, EY teams operate in a globally connected, multi-disciplinary network, providing services in over 150 countries and territories.,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You should have at least 5 years of experience working as a Data Engineer. Your expertise should include a strong background in Azure Cloud services and proficiency in tools such as Azure Databricks, PySpark, and Delta Lake. It is essential to have solid experience in Python and FastAPI for API development, as well as familiarity with Azure Functions for serverless API deployments. Experience in managing ETL pipelines using Apache Airflow is also required. Hands-on experience with databases like PostgreSQL and MongoDB is necessary. Strong SQL skills and the ability to work with large datasets are key for this role.
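For a sense of the Airflow-managed ETL mentioned above, here is a minimal, non-authoritative sketch; the DAG id, schedule, and task body are hypothetical placeholders rather than a real pipeline:

```python
# Hypothetical sketch: a minimal Airflow DAG orchestrating one daily ETL step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder for real work: pull from a source system and load into
    # PostgreSQL or MongoDB (e.g., via SQLAlchemy or pymongo).
    print(f"Running ETL for logical date {context['ds']}")


with DAG(
    dag_id="daily_sales_etl",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```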
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the data engineering team at PepsiCo, you will play a crucial role in developing and overseeing data product build & operations. Your primary responsibility will be to drive a strong vision for how data engineering can proactively create a positive impact on the business. Working alongside a team of data engineers, you will build data pipelines, rest data on the PepsiCo Data Lake, and facilitate exploration and access for analytics, visualization, machine learning, and product development efforts across the company. Your contributions will directly impact the design, architecture, and implementation of PepsiCo's flagship data products in areas such as revenue management, supply chain, manufacturing, and logistics. You will collaborate closely with process owners, product owners, and business users, operating in a hybrid environment that includes in-house, on-premises data sources as well as cloud and remote systems. Your responsibilities will include active contribution to code development, managing and scaling data pipelines, building automation and monitoring frameworks for data pipeline quality and performance, implementing best practices around systems integration, security, performance, and data management, and empowering the business through increased adoption of data, data science, and business intelligence. Additionally, you will collaborate with internal clients, drive solutioning and POC discussions, and evolve the architectural capabilities of the data platform by engaging with enterprise architects and strategic partners. To excel in this role, you should have 6+ years of overall technology experience, including 4+ years of hands-on software development, data engineering, and systems architecture. You should also possess 4+ years of experience with Data Lake infrastructure, data warehousing, and data analytics tools, along with expertise in SQL optimization, performance tuning, and programming languages like Python, PySpark, and Scala. Experience in cloud data engineering, specifically in Azure, is essential, and familiarity with Azure cloud services is a plus. You should have experience in data modeling, data warehousing, building ETL pipelines, and working with data quality tools. Proficiency in MPP database technologies, cloud infrastructure, containerized services, version control systems, deployment & CI tools, and Azure services like Data Factory, Databricks, and Azure Machine Learning tools is desired. Additionally, experience with statistical/ML techniques, retail or supply chain solutions, metadata management, data lineage, data glossaries, agile development, DevOps and DataOps concepts, and business intelligence tools will be advantageous. A degree in Computer Science, Math, Physics, or related technical fields is preferred for this role.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
The role of Senior Data Engineer at GSPANN involves designing, developing, and optimizing scalable data solutions, utilizing expertise in Azure Data Factory, Azure Databricks, PySpark, Delta Tables, and advanced data modeling. The position also demands proficiency in performance optimization, API integrations, DevOps practices, and data governance. You will be responsible for designing, developing, and orchestrating scalable data pipelines using Azure Data Factory (ADF). Additionally, you will build and manage Apache Spark clusters, create notebooks, and execute jobs in Azure Databricks. Ingesting, organizing, and transforming data within the Microsoft Fabric ecosystem using OneLake will also be part of your role. Your tasks will include authoring complex transformations, writing SQL queries for large-scale data processing using PySpark and Spark SQL, and creating, optimizing, and maintaining Delta Lake tables. Furthermore, you will parse, validate, and transform semi-structured JSON datasets, build and consume REST/OData services for custom data ingestion through API integration, and implement bronze, silver, and gold layers in data lakes using the Medallion Architecture. To ensure efficient processing of high data volumes for large-scale performance optimization, you will apply partitioning, caching, and resource tuning. Designing star and snowflake schemas, along with fact and dimension tables for multidimensional modeling in reporting use cases, will be a crucial aspect of your responsibilities. Working with tabular and OLAP cube structures in Azure Analysis Services to facilitate downstream business intelligence will also be part of your role, along with collaborating with the DevOps team to define infrastructure, manage access and security, and automate deployments. In terms of skills and experience, you are expected to ingest and harmonize data from SAP ECC and S/4HANA systems using Data Sphere. Utilizing Git, Azure DevOps Pipelines, Terraform, or Azure Resource Manager templates for CI/CD and DevOps tooling, leveraging Azure Monitor, Log Analytics, and data pipeline metrics for data observability and monitoring, conducting query diagnostics, identifying bottlenecks, and determining root causes for performance troubleshooting are among the key responsibilities. Applying metadata management, tracking data lineage, and enforcing compliance best practices for data governance and cataloging are also part of the role. Lastly, documenting processes, designs, and solutions effectively in Confluence is essential for this position.,
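To make the medallion-layer responsibility concrete, here is a hedged sketch (table names, payload schema, and the cleansing rules are all hypothetical) of a bronze-to-silver promotion using PySpark and Delta tables:

```python
# Hypothetical sketch: promoting raw JSON events from a bronze Delta table to
# a validated silver table (Medallion Architecture). Names are illustrative.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

payload_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])

# Bronze: raw, semi-structured JSON landed as-is (assumed string column "payload").
bronze = spark.table("bronze.raw_orders")

# Silver: parsed, validated, deduplicated records.
silver = (
    bronze
    .select(F.from_json(F.col("payload"), payload_schema).alias("o"),
            F.col("ingested_at"))
    .select("o.*", "ingested_at")
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
    .dropDuplicates(["order_id"])
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```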
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Data Analytics Lead, you will be responsible for overseeing the design, development, and implementation of data analysis solutions to meet business needs. Working closely with business stakeholders and the Aviation Subject Matter Expert (SME), you will define data requirements, project scope, and deliverables. Your role will involve driving the design and development of analytics data models and data warehouse designs, as well as developing and maintaining data quality standards and procedures. In this position, you will manage and prioritize data analysis projects to ensure timely completion. You will also be expected to identify opportunities to improve data analysis processes and tools, collaborating with Data Engineers and Data Architects to ensure that data solutions align with the overall data platform architecture. Additionally, you will evaluate and recommend new data analysis tools and technologies, contributing to the development of best practices for data analysis. Participating in project meetings, you will provide input on data-related issues, risks, and requirements. The ideal candidate for this role should have at least 12 years of experience as a Data Analytics Lead, with a proven track record of leading or mentoring a team. Extensive experience with cloud-based data modeling and data warehousing solutions, particularly using Azure Databricks, is required. Proficiency in data visualization tools such as Power BI is also essential. Furthermore, the role calls for experience in data analysis, statistical modeling, and machine learning techniques. Proficiency in analytical tools like Python, R, and libraries such as Pandas and NumPy for data analysis and modeling is expected. Strong expertise in Power BI for data visualization, data modeling, and DAX queries, along with experience in implementing Row-Level Security in Power BI, is highly valued. The successful candidate should also demonstrate proficiency in SQL Server and query optimization, expertise in application data design and process management, and extensive knowledge of data modeling. Hands-on experience with Azure Data Factory, Azure Databricks, SSIS (SQL Server Integration Services), and SSAS (SQL Server Analysis Services) is required. Additionally, familiarity with big data technologies such as Hadoop, Spark, and Kafka for large-scale data processing, as well as an understanding of data governance, compliance, and security measures within Azure environments, will be advantageous. Overall, this role offers the opportunity to work on medium-complex data models, understand application data design and processes quickly, and contribute to the optimization of data analysis processes within the organization.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
We are the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. Through our unmatched range of services, we provide stability, security, and improved business performance, freeing our clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide, with over $3 trillion of assets under management, put their trust in us. We believe that success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.
We have an exciting opportunity for a Cloud Data Engineer. This full-time position is open for an experienced Senior Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer-facing, requiring excellent communication and people skills. A positive attitude, rigorous work habits, and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role.
As a senior cloud data engineer with 7+ years of experience, you will have strong knowledge of and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake Data Warehouse, and Azure Fabric. Experience with Azure Databricks, Azure Cosmos DB, Azure AI, and developing cloud-based applications is good to have. You should be able to analyze problems and provide solutions; design, implement, and manage data warehouse solutions using Azure Synapse Analytics or similar technologies; migrate data from on-premises to cloud; and demonstrate proficiency in data modeling techniques.
Your responsibilities include:
- Designing and developing ETL/ELT processes to move data between systems and transform data for analytics.
- Strong programming skills in languages such as SQL, Python, or Scala; developing and maintaining data pipelines.
- Experience in at least one reporting tool, such as Power BI or Tableau.
- Working effectively in a team environment and communicating complex technical concepts to non-technical stakeholders.
- Managing and optimizing databases.
- Understanding business requirements and converting them into technical designs for implementation.
- Performing analysis, developing, and testing code.
- Designing and developing cloud-based applications using Python on a serverless framework.
- Troubleshooting; creating, maintaining, and enhancing applications.
- Working independently as an individual contributor and following Agile methodology (SCRUM).
You should have experience in developing cloud-based data applications; hands-on experience in Azure data services, data warehousing, and ETL; an understanding of cloud architecture principles and best practices; experience developing pipelines using ADF and Synapse and migrating data from on-premises to cloud; and the ability to write complex SQL scripts and transformations, analyze problems, and provide solutions. Knowledge of CI/CD pipelines, Python, and API Gateway is expected. Product management/BA experience is a nice-to-have.
Our culture is all about connection - connection with our clients, our technology, and most importantly with each other. In addition to working with an amazing team around the world, we also offer a competitive compensation package. If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you. Love Your Job, Share Your Technology Passion, Create Your Future Here!
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Kolkata, West Bengal
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. As a Staff - Data Engineer at EY, your responsibilities include designing and developing software components using various tools such as PySpark, Sqoop, Flume, Azure Databricks, and more. You will perform detailed analysis and interact effectively with onshore/offshore team members, ensuring all deliverables conform to the highest quality standards and are executed in a timely manner. This role is deadline-oriented and may require working under the US time schedule. Additionally, you will identify areas of improvement, conduct performance tests, consult with the design team, ensure high performance of applications, and work well with development/product engineering teams. To be successful in this role, you should have 2-4 years of experience in the BCM or WAM industry, preferably with exposure to US-based asset management or fund administration firms. You should have a strong understanding of data in the BCM/WAM space, including knowledge of KDEs such as Funds, Positions, Transactions, Trial Balance, Securities, Investors, and more. Proficiency in programming languages like Python, hands-on experience with Big Data tools such as PySpark, Sqoop, Hive, and Hadoop Cluster, as well as Cloud technologies like Azure Databricks, are essential. Expertise in databases like Oracle and SQL Server, and exposure to Big Data, is a plus. Knowledge of data visualization tools and the ability to write programs for file/data validations, EDA, and data cleansing are also desired. As an ideal candidate, you should be highly data-driven, capable of writing complex data transformation programs using PySpark and Python, and have experience in data integration and processing using Spark. Hands-on experience in creating real-time data streaming solutions using Spark Streaming and Flume, as well as in handling large datasets and writing Spark jobs and Hive queries for data analysis, is a valuable asset. Experience working in an agile environment will be beneficial for this role. Join EY in building a better working world, where diverse teams across assurance, consulting, law, strategy, tax, and transactions help clients grow, transform, and operate. EY aims to create long-term value for clients, people, and society, while building trust in the capital markets through data and technology-enabled solutions worldwide.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Data Scientist at the Global Design Technology Studio within Gensler, you will play a crucial role in driving innovative solutions that connect data and outcomes from designs to practitioners, projects, and clients. You will be part of a dynamic team that fosters creativity and collaboration to identify trends, insights, and opportunities for improvement through the analysis, synthesis, and assessment of large datasets. Your responsibilities will include collaborating with the Data team to develop, train, and deploy ML classification algorithms using data from design authoring tools, exploring trends and defining insights aligned with business needs, developing data pipelines in MS Azure, researching and implementing the integration of additional datasets, and supporting the R&D of future Strategic Insights initiatives. To excel in this role, you are required to have a Bachelor's or Master's degree in Data Science, Statistics, Applied Math, or Computer Science, along with practical experience in statistical analysis, machine learning, and predictive modeling. Proficiency in Azure services like Azure Machine Learning, Azure Databricks, and Azure Data Factory, as well as programming languages such as SQL, Python, and R, is essential. Experience with data visualization tools like PowerBI, machine learning technologies, and generative AI models is highly beneficial. Gensler is a people-first organization that values work-life balance and professional development. As part of the team, you will have access to comprehensive benefits, wellness programs, and opportunities for growth and development. Join us in transforming the digital landscape of design technology and making a real impact with your skills and expertise.
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
You have a great opportunity to join as a Data Software Engineer with 5-12 years of experience in Big Data and related technologies. We are looking for candidates with an expert-level understanding of distributed computing principles and hands-on experience in Apache Spark, along with proficiency in Python. You should also have experience with technologies like Hadoop, MapReduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, Hive, and Impala, and with integrating data from various sources such as RDBMS, ERP systems, and files. Additionally, knowledge of NoSQL databases, ETL techniques, SQL queries, joins, stored procedures, relational schemas, and performance tuning of Spark jobs is required. Moreover, you must have experience with native cloud data services like Azure Databricks and the ability to lead a team efficiently. Familiarity with Agile methodology and designing/implementing Big Data solutions would be an added advantage. This full-time position is based in Hyderabad and requires candidates who are available for face-to-face interactions. If you meet these requirements and are passionate about working with cutting-edge technologies in the field of Big Data, we would love to hear from you.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You should have 6 to 9 years of experience in Data Engineering with a focus on Azure Databricks, Azure Data Factory, PySpark, and SQL. The work location for this role can be Bengaluru, Pune, Mumbai, Noida, or Gurugram, with a notice period of 0 to 15 days. This is a hybrid role.
As a Data Engineer in this role, your key responsibilities will include:
- Developing scalable data pipelines using Azure Data Factory (ADF), Databricks, PySpark, and Delta Lake to support ML and AI workloads.
- Optimizing and transforming large datasets for feature engineering, model training, and real-time AI inference.
- Building and maintaining lakehouse architecture using Azure Data Lake Storage (ADLS) and Delta Lake.
- Collaborating closely with ML engineers and data scientists to deliver high-quality, structured data for training generative AI models.
- Implementing MLOps best practices for continuous data processing, versioning, and model retraining workflows.
- Monitoring and enhancing data quality using Azure Data Quality Services.
- Ensuring cost-efficient data processing in Databricks by utilizing Photon, Delta caching, and auto-scaling clusters (see the sketch below).
- Securing data pipelines by implementing RBAC, encryption, and governance.
The required skills and experience for this position include:
- 5+ years of experience in Data Engineering with Azure and Databricks.
- Proficiency in PySpark, SQL, and Delta Lake for large-scale data transformations.
- Strong experience with Azure Data Factory (ADF), Azure Synapse, and Event Hubs.
- Hands-on experience in building feature stores for ML models.
- Experience with ML model deployment and MLOps pipelines (MLflow, Kubernetes, or Azure ML) is considered a plus.
- A good understanding of generative AI concepts and handling unstructured data.
- Strong problem-solving, debugging, and performance-optimization skills.
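One hedged illustration of the cost-efficiency techniques named above (partitioning, caching, and file compaction; the table, columns, and date literal are hypothetical, and OPTIMIZE/ZORDER is Databricks-specific Delta functionality):

```python
# Hypothetical sketch: laying out a Delta table so feature-engineering reads
# scan less data, then caching a hot subset. Names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.table("silver.user_events")  # hypothetical source

# Partition by a low-cardinality column that queries filter on frequently.
(events.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("gold.user_events_partitioned"))

# Compact small files and co-locate rows by a high-cardinality filter column.
spark.sql("OPTIMIZE gold.user_events_partitioned ZORDER BY (user_id)")

# Cache a hot subset for iterative feature-engineering work on the cluster.
recent = (spark.table("gold.user_events_partitioned")
               .filter("event_date >= '2024-01-01'"))
recent.cache()
recent.count()  # an action materializes the cache
```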
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The Position is a senior technical, hands-on delivery role, requiring the knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports i.e. Financial Crime, Financial Risk, and Compliance technology transformation. The ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: - Ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases. - Driving improvements in the reliability and frequency of data ingestion including increasing real-time coverage. - Support and enhancement of data ingestion infrastructure and pipelines. - Designing and implementing data pipelines that will collect data from disparate sources across the enterprise, and from external sources and deliver it to our data platform. - Extract Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatically manipulation data throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service, and customer along said data flow. - Identifying and onboarding data sources using existing schemas and where required, conducting exploratory data analysis to investigate and provide solutions. - Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities. Core/Must-Have Skills: - 3-8 years of expertise in designing and implementing data warehouses, data lakes using Oracle Tech Stack (ETL: ODI, SSIS, DB: PLSQL and AWS Redshift). - At least 4+ years of experience in managing data extraction, transformation, and loading various sources using Oracle Data Integrator with exposure to other tools like SSIS. - At least 4+ years of experience in Database Design and Dimension modeling using Oracle PLSQL, Microsoft SQL Server. - Experience in developing ETL processes ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization workflow design, etc. - Advanced working SQL Knowledge and experience working with relational and NoSQL databases as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4J). - Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems. - Expertise in data modeling and DB Design with skills in performance tuning. - Experience with OLAP, OLTP databases, and data structuring/modeling with an understanding of key data points. 
- Experience building and optimizing data pipelines on Azure Databricks, AWS Glue, or Oracle Cloud.
- Experience creating and supporting ETL pipelines and table schemas to accommodate new and existing data sources for the lakehouse.
- Experience with data visualization (Power BI/Tableau) and SSRS.

Good to Have:
- Experience in Financial Crime, Financial Risk, and Compliance technology transformation domains.
- Certification in any cloud tech stack, preferably Microsoft Azure.
- In-depth knowledge of, and hands-on experience with, data engineering, data warehousing, and Delta Lake, both on-prem (Oracle RDBMS, Microsoft SQL Server) and in the cloud (Azure, AWS, or Oracle Cloud).
- Ability to script (Bash, Azure CLI), code (Python, C#), and query (SQL, PL/SQL, T-SQL), coupled with experience using version control (e.g., GitHub) and CI/CD systems.
- Design and development of systems for maintaining the Azure/AWS lakehouse, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.
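The "ETL control tables, error logging, auditing" requirement describes a common run-audit pattern: every pipeline run records its start, outcome, row count, and any error in a control table. Below is a minimal, vendor-neutral Python sketch of that pattern; sqlite3 stands in for the Oracle or SQL Server control schema, and all table and column names are illustrative.

```python
# Sketch of an ETL run-control/audit table. sqlite3 is used only so the
# example is self-contained; in practice this lives in Oracle/SQL Server.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("etl_control.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS etl_run_log (
        run_id        INTEGER PRIMARY KEY AUTOINCREMENT,
        job_name      TEXT,
        started_at    TEXT,
        finished_at   TEXT,
        status        TEXT,
        rows_loaded   INTEGER,
        error_message TEXT)
""")

def run_job(job_name, load_fn):
    """Wrap a load step so every run is audited, success or failure."""
    now = lambda: datetime.now(timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO etl_run_log (job_name, started_at, status) VALUES (?, ?, 'RUNNING')",
        (job_name, now()))
    run_id = cur.lastrowid
    try:
        rows = load_fn()  # the actual extract/transform/load work
        conn.execute(
            "UPDATE etl_run_log SET finished_at=?, status='SUCCESS', rows_loaded=? WHERE run_id=?",
            (now(), rows, run_id))
    except Exception as exc:  # error logging: record the failure, then re-raise
        conn.execute(
            "UPDATE etl_run_log SET finished_at=?, status='FAILED', error_message=? WHERE run_id=?",
            (now(), str(exc), run_id))
        raise
    finally:
        conn.commit()

run_job("load_customers", lambda: 42)  # toy load step returning a row count
```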
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As an Azure Databricks Developer with fluency in Japanese, you will leverage your technical expertise to contribute to ongoing operations and development projects within our technology services client's team. With a focus on Azure Databricks and PySpark, you will play a key role in improving data processing efficiency and optimizing solutions across a range of tools and technologies.

Your work will center on Azure Databricks, PySpark DataFrames, RDDs, DAGs, partitioning, Spark SQL, and clustering to drive innovation and problem-solving (a small sketch follows below). Your hands-on experience with the Azure Databricks and Power BI tech stack will help ensure seamless operations and continuous improvement in data projects. Your knowledge of Azure DevOps, Azure SQL DW, and ADF will enable you to design and implement effective solutions, and familiarity with the Azure medallion architecture will help you align with architectural best practices.

You will be expected to conduct detailed Root Cause Analysis (RCA) on incidents, develop preventive actions, and contribute actively to both development and operations data projects. Your ability to establish and manage processes for code quality, CI/CD, testing, and execution will be crucial to maintaining high standards within the team. Strong written and oral communication skills in English and Japanese are essential for effective collaboration and coordination with stakeholders.

If you have the requisite technical skills, language proficiency, and a passion for impactful data solutions, share your updated resume with us at ravi.k@s3staff.com. Join us to be part of a dynamic team where your expertise in Azure Databricks and Japanese will be valued, and where your contributions will shape innovative solutions and drive operational excellence.
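As a small illustration of the DataFrame, partitioning, and Spark SQL vocabulary above, the sketch below repartitions a toy dataset by its grouping key and runs the same aggregation through Spark SQL; the sample data and column names are invented for illustration.

```python
# PySpark sketch: DataFrames, repartitioning, and Spark SQL on toy data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

df = spark.createDataFrame(
    [("JP", 120), ("JP", 80), ("IN", 200), ("IN", 50)],
    ["country", "amount"])

# Repartition by the grouping key so the shuffle lines up with the data layout.
df = df.repartition("country")

# Express the aggregation through Spark SQL via a temporary view.
df.createOrReplaceTempView("sales")
spark.sql("""
    SELECT country, SUM(amount) AS total_amount
    FROM sales
    GROUP BY country
""").show()
```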
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, and underpinned by technology created with purpose.

Your role requires at least 5 years of IT experience creating data warehouses, data lakes, ETL/ELT, and data pipelines in the cloud. You should have experience implementing data pipelines with cloud providers such as AWS, Azure, or GCP, preferably in the life sciences domain. Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, and S3 is essential, as is familiarity with cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc (a small sketch of the semi-structured case follows below). Good knowledge of infrastructure capacity sizing and cloud service costing is required, so you can drive solution architectures that balance infrastructure investment against performance and scaling.

Your profile should demonstrate the ability to make architectural choices across cloud services and solution methodologies. Expertise in Python programming is a must, along with very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated cloud deployments. You should understand networking, security, design principles, and cloud best practices; knowledge of IoT and real-time streaming is an added advantage. You will lead architectural and technical discussions with clients, so excellent communication and presentation skills are essential.

At Capgemini, we recognize the significance of flexible work arrangements. Whether it's remote work or flexible hours, you will have an environment that supports a healthy work-life balance. Our mission is centered on your career growth, offering a range of career development programs and diverse professions to help you explore a world of opportunities, and you can equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of over 340,000 team members in more than 50 countries, Capgemini has a heritage of over 55 years. Clients trust Capgemini to unlock the value of technology to address the entire breadth of their business needs, delivering end-to-end services and solutions from strategy and design to engineering, fueled by market-leading capabilities in AI, Generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.
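As a small illustration of the semi-structured data integration the posting mentions, the sketch below flattens nested JSON records and writes them to Parquet, the columnar format typically used with the lake solutions listed above. The records and file name are invented; in practice the target would be ADLS, S3, or GCS, and pandas.to_parquet assumes pyarrow or fastparquet is installed.

```python
# Flatten semi-structured JSON into columns and persist as Parquet.
# Sample records and the output path are illustrative only.
import pandas as pd

records = [
    {"id": 1, "user": {"name": "asha", "country": "IN"}, "events": 3},
    {"id": 2, "user": {"name": "ken",  "country": "JP"}, "events": 7},
]

df = pd.json_normalize(records)   # yields columns: id, events, user.name, user.country
df.to_parquet("events.parquet", index=False)  # columnar layout suits lake-side queries
print(df.columns.tolist())
```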
Posted 1 week ago