8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Tech Lead - Data Bricks

Job Date: Aug 2, 2025
Job Requisition Id: 59586
Location: Hyderabad, TG, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring Data Bricks Professionals in the following areas:

Experience: 8+ Years

Job Description:
- 8+ years of overall experience, with a minimum of 3 years in Azure, and at least 3 years in a lead role.
- Should come from a DWH background with strong ETL experience.
- Strong hands-on experience in Azure Databricks/PySpark.
- Strong hands-on experience in Azure Data Factory and DevOps.
- Strong knowledge of the Big Data stack.
- Strong knowledge of Azure Event Hubs, the pub-sub model, and security.
- Strong communication and analytical skills.
- Highly proficient in SQL development.
- Experience working in an Agile environment.
- Work as a team lead to develop Cloud Data and Analytics solutions.
- Mentor junior developers and testers.
- Able to build strong relationships with the client's technical team.
- Participate in the development of cloud data warehouses, data as a service, and business intelligence solutions.
- Data wrangling of heterogeneous data.
- Coding complex Spark (Scala or Python).

Required Behavioral Competencies:
Accountability: Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team.
Collaboration: Shares information within the team, participates in team activities, asks questions to understand other points of view.
Agility: Demonstrates readiness for change, asking questions and determining how changes could impact own work.
Customer Focus: Identifies trends and patterns emerging from customer preferences and works towards customizing/refining existing services to exceed customer needs and expectations.
Communication: Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision.
Drives Results: Sets realistic stretch goals for self and others to achieve and exceed defined goals/targets.
Resolves Conflict: Displays sensitivity in interactions and strives to understand others' views and concerns.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: Flexible work arrangements, free spirit, and emotional positivity; Agile self-determination, trust, transparency, and open collaboration; All support needed for the realization of business goals; Stable employment with a great atmosphere and ethical corporate culture.
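To make the hands-on expectations above concrete, here is a minimal sketch of the kind of Azure Databricks/PySpark batch job the role describes: read raw files from ADLS, cleanse, aggregate, and publish a Delta table. The storage paths and table names are hypothetical placeholders, not a real client environment:

```python
# Minimal PySpark batch pipeline of the kind described above.
# The ADLS path and table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

# Read raw CSV landed in ADLS Gen2 (path is hypothetical).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Basic cleansing: drop exact duplicates, standardize types, filter bad rows.
clean = (raw.dropDuplicates()
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
            .filter(F.col("order_id").isNotNull()))

# Daily aggregate for downstream BI consumption.
daily = (clean.groupBy(F.to_date("order_ts").alias("order_date"))
              .agg(F.count("*").alias("orders"),
                   F.sum("amount").alias("revenue")))

# Persist as a Delta table (the Databricks default table format).
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_daily")
```

In practice a job like this would be parameterized and triggered from Azure Data Factory or a Databricks workflow rather than run ad hoc.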
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. We are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world - and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring Hadoop Professionals in the following areas:

**Position Title:** Data Engineer

**SCOPE OF RESPONSIBILITY:** As part of a global, growing team of data engineers, you will collaborate in a DevOps model to enable clients' Life Science business with cutting-edge technology, leveraging data as an asset to support better decision-making. You will design, develop, test, and support automated end-to-end data pipelines and applications within the Life Sciences data management and analytics platform (Palantir Foundry, Hadoop, and other components). This position requires proficiency in data engineering, distributed computation, and DevOps methodologies, utilizing AWS infrastructure and on-premises data centers to support multiple technology stacks.

**PURPOSE OF THE POSITION:** The purpose of this role is to build and maintain data pipelines, develop applications on various platforms, and support data-driven decision-making processes across clients' Life Science business. You will work closely with cross-functional teams, including business users, data scientists, and data analysts, while ensuring the best balance between technical feasibility and business requirements.

**RESPONSIBILITIES:**
- Develop data pipelines by ingesting various structured and unstructured data sources into Palantir Foundry.
- Participate in end-to-end project lifecycles, from requirements analysis to deployment and operations.
- Act as a business analyst for developing requirements related to Foundry pipelines.
- Review code developed by other data engineers, ensuring adherence to platform standards and functional specifications.
- Document technical work professionally and create high-quality technical documentation.
- Balance technical feasibility with strict business requirements.
- Deploy applications on Foundry platform infrastructure with clearly defined checks.
- Implement changes and bug fixes following the client's change management framework.
- Work in DevOps project setups following Agile principles (e.g., Scrum).
- Act as third-level support for critical applications, resolving complex incidents and debugging problems across the full stack.
- Work closely with business users, data scientists, and analysts to design physical data models.
- Provide support in designing ETL/ELT processes with databases and Hadoop platforms.

**EDUCATION:** Bachelor's degree or higher in Computer Science, Engineering, Mathematics, Physical Sciences, or related fields.

**EXPERIENCE:** 5+ years of experience in system engineering or software development. 3+ years of experience in engineering with a focus on ETL work involving databases and Hadoop platforms.

**TECHNICAL SKILLS:**
- Hadoop General: Deep knowledge of distributed file system concepts, map-reduce principles, and distributed computing. Familiarity with Spark and its differences from MapReduce.
- Data Management: Proficient in technical data management tasks such as reading, transforming, and storing data, including experience with XML/JSON and REST APIs.
- Spark: Experience in launching Spark jobs in both client and cluster modes, with an understanding of property settings that impact performance.
- Application Development: Familiarity with HTML, CSS, JavaScript, and basic visual design competencies.
- SCC/Git: Experienced in using source code control systems like Git.
- ETL/ELT: Experience developing ETL/ELT processes, including loading data from enterprise-level RDBMS systems (e.g., Oracle, DB2, MySQL).
- Authorization: Basic understanding of user authorization, preferably with Apache Ranger.
- Programming: Proficient in Python, with expertise in at least one high-level language (e.g., Java, C, Scala). Must have experience using REST APIs.
- SQL: Expertise in SQL for manipulating database data, including views, functions, stored procedures, and exception handling.
- AWS: General knowledge of the AWS stack (EC2, S3, EBS, etc.).
- IT Process Compliance: Experience with SDLC processes, change control, and ITIL (incident, problem, and change management).

**REQUIRED SKILLS:**
- Strong problem-solving skills with an analytical mindset.
- Excellent communication skills to collaborate with both technical and non-technical teams.
- Experience working in Agile/DevOps teams, utilizing Scrum principles.
- Ability to thrive in a fast-paced, dynamic environment while managing multiple tasks.
- Strong organizational skills with attention to detail.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles - Flexible work arrangements, free spirit, and emotional positivity; Agile self-determination, trust, transparency, and open collaboration; All support needed for the realization of business goals; Stable employment with a great atmosphere and ethical corporate culture.
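The ETL/ELT requirement above, loading from enterprise-level RDBMS systems into the platform, commonly looks like the following PySpark sketch. The JDBC endpoint, credentials, and table names are illustrative assumptions, and the appropriate JDBC driver is assumed to be on the cluster:

```python
# Sketch of an ETL load from an enterprise RDBMS into the data platform.
# JDBC URL, credentials, and table names are illustrative, not real endpoints.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-ingest").getOrCreate()

# Parallel JDBC read: partitioning on a numeric key spreads the extract
# across executors instead of funnelling it through a single connection.
src = (spark.read.format("jdbc")
       .option("url", "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB")
       .option("dbtable", "SALES.TRANSACTIONS")
       .option("user", "etl_user")
       .option("password", "***")
       .option("partitionColumn", "TRANSACTION_ID")
       .option("lowerBound", "1")
       .option("upperBound", "100000000")
       .option("numPartitions", "16")
       .load())

# Land the extract as partitioned Parquet for downstream pipelines.
(src.write.mode("overwrite")
    .partitionBy("TRANSACTION_DATE")
    .parquet("/data/raw/sales/transactions"))
```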
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
As a Lead Software Engineer at the Loyalty Rewards and Segments Organization within Mastercard, you will play a crucial role in designing, developing, testing, and delivering software frameworks for use in large-scale distributed systems. In this position, you will lead the technical direction, architecture, design, and engineering practices to create cutting-edge solutions in event-driven architecture and zero trust. Your responsibilities will include prototyping new technologies, designing and developing software frameworks using best practices, writing efficient code, debugging and troubleshooting to improve performance, and collaborating with cross-functional teams to deliver high-quality services. You will balance competing interests with judgment and experience, identify synergies across teams, and drive process improvements and efficiency gains.

To excel in this role, you must have deep hands-on experience in software engineering, particularly in architecture, design, and implementation of large-scale distributed systems. Expertise in event-driven architecture and knowledge of zero trust architecture are essential. Proficiency in Java, Scala, SQL, and building pipelines is required, along with experience in the Hadoop ecosystem, including tools like Hive, Pig, Spark, and cloud platforms. Your technical skills should also include expertise in web applications, web services, and tools such as Spring Boot, Angular, REST, OAuth, Sonar, Splunk, and Dynatrace. Familiarity with XP, TDD, BDD, secure coding standards, and vulnerability management is important. You should demonstrate strong problem-solving skills, experience in Agile environments, and excellent verbal and written communication.

As a Lead Software Engineer, you will have the opportunity to mentor junior team members, demo features to product owners, and take development work from inception to implementation. Your passion for technology, continuous learning, and proactive approach to challenges will drive the team towards success. You will also be responsible for upholding Mastercard's security policies, ensuring information security, and reporting any violations or breaches. If you are a motivated, intellectually curious individual with a strong background in software design and development, this role offers a platform to work on innovative technologies and deliver solutions that meet the needs of Mastercard's customers. Join us in shaping the future of loyalty management solutions for banks, merchants, and Fintechs.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
Are you seeking an exciting opportunity to become a part of a dynamic and expanding team in a fast-paced and challenging environment? This unique role offers you the chance to collaborate with the Business in providing a comprehensive view. This position entails a quant profile supporting the activities of the Quantitative Research Group across various asset classes and Custody & Fund Services globally, and is located in Mumbai. The Quantitative Research team in Mumbai holds a pivotal role in delivering effective, timely, and independent assessments of the Firm's booking models for exotic structures and contributes to the development of new models as required.

As a Quantitative Research Associate/Vice President within the Quantitative Research Group, you will engage with the Business to offer a comprehensive perspective and aid the activities of the Quantitative Research Group globally across asset classes. Your key responsibilities will involve delivering effective, timely, and independent assessments of the Firm's booking models for exotic structures and contributing to the development of new models when necessary. You will be an integral part of a team revolutionizing business practices through data science and other quantitative methods, where JP Morgan stands as a leading player, managing trillions of dollars of client assets.

**Job Responsibilities:**
- Conduct large-scale analysis on our proprietary dataset to address unprecedented challenges
- Discover new insights to drive feature modeling for nuanced insights
- Develop models from prototype to full-scale production
- Provide real-world, commercial recommendations through impactful presentations to various stakeholders
- Utilize data visualization for effective communication of data insights and results
- Document and test new/existing models in collaboration with control groups
- Implement models in Python-based proprietary libraries
- Offer ongoing desk support

**Required Qualifications, Capabilities & Skills:**
- A master's or Ph.D. degree in computer science, statistics, operations research, or another quantitative field
- Strong technical skills in data manipulation, extraction, and analysis
- Fundamental knowledge of statistics, optimization, and machine learning methodologies
- Ability to build models from prototype to full-scale production, utilizing ML/big data modeling techniques
- Familiarity with developing models on cloud infrastructure
- Integration and utilization of LLM models for advanced model development and enhancement
- Basic understanding of financial instruments and their pricing
- Proficiency in software design principles and development skills using C++, Python, R, Java, or Scala
- Prior practical experience in solving machine learning problems using open-source packages
- Excellent communication skills (both verbal and written) and the capability to present findings to non-technical audiences

**Preferred Qualifications, Capabilities & Skills:**
- Participation in KDD/Kaggle competitions or contributions to GitHub are highly desirable
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Data Scientist specializing in Artificial Intelligence and Machine Learning at FIS, you will be part of a dynamic team that thrives on curiosity, motivation, and forward-thinking. Your role will involve utilizing cutting-edge AI technologies to tackle complex challenges in the financial services and technology sectors.

You will be responsible for leveraging AI/ML concepts such as GenAI, LLMs, and sentiment analysis to develop innovative solutions. This will involve establishing correlations between various data points and conducting predictive and preventive analyses to drive business outcomes. Working closely with our AIOps team, you will contribute to the development of AI/ML solutions aimed at reducing Enterprise MTTR through a top-notch product. Your day-to-day tasks will include employing machine learning and statistical modeling techniques to enhance data-driven products, extracting actionable insights from diverse datasets, and transforming complex data into actionable information.

To excel in this role, you should possess 4 to 6 years of experience in machine learning, artificial intelligence, statistical modeling, and data analysis. Proficiency in data science tools and programming languages such as SAS, Python, R, Scala, and SQL is essential. A relevant degree in Artificial Intelligence/Machine Learning, as well as knowledge of cloud-based technologies and the financial services industry, will be advantageous.

In return, FIS offers a comprehensive range of benefits to support your wellbeing and lifestyle. You will be part of a diverse and innovative team operating in a modern international work environment. Additionally, you will have access to professional education and personal development opportunities to further enhance your skills and career growth. Join us at FIS and be part of a team that values collaboration, entrepreneurship, and fun while making a meaningful impact in the world of AI and data science.
Posted 6 days ago
3.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
This is a data engineer position where you will be responsible for designing, developing, implementing, and maintaining data flow channels and data processing systems to support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner in coordination with the Data & Analytics team. Your main objective will be to define optimal solutions for data collection, processing, and warehousing, particularly within the banking & finance domain. You must have expertise in Spark Java development for big data processing, Python, and Apache Spark. You will be involved in designing, coding, and testing data systems and integrating them into the internal infrastructure.

Your responsibilities will include ensuring high-quality software development with complete documentation, developing and optimizing scalable Spark Java-based data pipelines, designing and implementing distributed computing solutions for risk modeling, pricing, and regulatory compliance, ensuring efficient data storage and retrieval using Big Data, implementing best practices for Spark performance tuning, maintaining high code quality through testing, CI/CD pipelines, and version control, working on batch processing frameworks for Market risk analytics, and promoting unit/functional testing and code inspection processes. You will also collaborate with business stakeholders, Business Analysts, and other data scientists to understand and interpret complex datasets.

Qualifications:
- 5-8 years of experience working in data ecosystems
- 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other Big Data frameworks
- 3+ years of experience with relational SQL and NoSQL databases such as Oracle, MongoDB, HBase
- Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
- Data integration, migration, and large-scale ETL experience
- Data modeling experience
- Experience building and optimizing big data pipelines, architectures, and datasets
- Strong analytic skills and experience working with unstructured datasets
- Experience with various technologies like Confluent Kafka, Redhat JBPM, CI/CD build pipelines, Git, BitBucket, Jira, external cloud platforms, container technologies, and supporting frameworks
- Highly effective interpersonal and communication skills
- Experience with the software development life cycle

Education:
- Bachelor's/University degree or equivalent experience in computer science, engineering, or a similar domain

This is a full-time position in the Data Architecture job family group within the Technology sector.
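The Spark performance tuning practices this role calls for usually come down to a few levers: shuffle parallelism, broadcast joins, selective caching, and output file sizing. The posting centers on Spark with Java; purely as a hedged illustration, here is what those levers look like in PySpark, with hypothetical table names:

```python
# Common Spark tuning levers, shown in PySpark for brevity.
# Table names and sizes are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder.appName("risk-batch")
         # Right-size shuffle parallelism for the cluster instead of the 200 default.
         .config("spark.sql.shuffle.partitions", "400")
         .getOrCreate())

trades = spark.read.parquet("/data/trades")      # large fact table
books = spark.read.parquet("/data/ref/books")    # small dimension

# Broadcast the small dimension to avoid shuffling the big side of the join.
enriched = trades.join(F.broadcast(books), "book_id")

# Cache only what is reused across several downstream aggregations.
enriched.cache()

by_desk = enriched.groupBy("desk").agg(F.sum("notional").alias("notional"))
by_ccy = enriched.groupBy("currency").agg(F.sum("notional").alias("notional"))

# Coalesce before writing to avoid thousands of tiny output files.
by_desk.coalesce(8).write.mode("overwrite").parquet("/data/out/by_desk")
by_ccy.coalesce(8).write.mode("overwrite").parquet("/data/out/by_ccy")
```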
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
At Iron Mountain, we believe that work, when done well, can have a positive impact on our customers, employees, and the planet. That's why we are looking for smart and committed individuals to join our team. Whether you are starting your career or seeking a change, we invite you to explore how you can enhance the impact of your work at Iron Mountain. We offer expert and sustainable solutions in records and information management, digital transformation services, data centers, asset lifecycle management, and fine art storage, handling, and logistics. Collaborating with over 225,000 customers worldwide, we aim to preserve valuable artifacts, optimize inventory, and safeguard data privacy through innovative and socially responsible practices. If you are interested in being part of our growth journey and expanding your skills in a culture that values diverse contributions, let's have a conversation.

As Iron Mountain progresses with its digital transformation, we are expanding our Enterprise Data Platform Team, which plays a crucial role in supporting data integration solutions, reporting, and analytics. The team focuses on maintaining and enhancing data platform components essential for delivering our data solutions. As a Data Platform Engineer at Iron Mountain, you will leverage your advanced knowledge of cloud big data technologies, software development expertise, and strong SQL skills. The ideal candidate will have a background in software development and big data engineering, with experience working in a remote environment and supporting both on-shore and off-shore engineering teams.

Key Responsibilities:
- Building and operationalizing cloud-based platform components
- Developing production-quality ingestion pipelines with automated quality checks to centralize access to all data sets
- Assessing current system architecture and recommending solutions for improvement
- Building automation using Python modules to support product development and data analytics initiatives
- Ensuring maximum uptime of the platform by utilizing cloud technologies such as Kubernetes, Terraform, Docker, etc.
- Resolving technical issues promptly and providing guidance to development teams
- Researching current and emerging technologies and proposing necessary changes
- Assessing the business impact of technical decisions and participating in collaborative environments to foster new ideas
- Maintaining comprehensive documentation on processes and decision-making

Your Qualifications:
- Experience with DevOps/Automation tools to minimize operational overhead
- Ability to contribute to self-organizing teams within the Agile/Scrum project methodology
- Bachelor's Degree in Computer Science or related field
- 3+ years of related IT experience
- 1+ years of experience building complex ETL pipelines with dependency management
- 2+ years of experience in Big Data technologies such as Spark, Hive, Hadoop, etc.
- Industry-recognized certifications
- Strong familiarity with PaaS services, containers, and orchestration
- Excellent verbal and written communication skills

What's in it for you?
- Be part of a global organization focused on transformation and innovation
- A supportive environment where you can voice your opinions and be your authentic self
- Global connectivity to learn from teammates across 52 countries
- Embrace diversity, inclusion, and differences within a winning team
- Competitive Total Reward offerings to support your career, family, wellness, and retirement

Iron Mountain is a global leader in storage and information management services, trusted by organizations worldwide. We safeguard critical business information, sensitive data, and cultural artifacts. Our services help lower costs, mitigate risks, comply with regulations, and enable digital solutions. If you require accommodations due to a disability, please reach out to us.

Category: Information Technology
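To picture the "ingestion pipelines with automated quality checks" responsibility above, here is a small sketch that gates a batch load behind volume, completeness, and uniqueness checks. The paths, thresholds, and column names are assumptions for illustration, not Iron Mountain specifics:

```python
# Sketch of automated quality checks gating an ingestion pipeline.
# Thresholds, paths, and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-with-checks").getOrCreate()
batch = spark.read.parquet("/landing/customers/2025-08-02")

errors = []

# Volume check: an empty or unusually small batch usually means an upstream failure.
row_count = batch.count()
if row_count < 1000:
    errors.append(f"row count {row_count} below expected minimum")

# Completeness check: key columns must not contain nulls.
for col in ("customer_id", "country_code"):
    nulls = batch.filter(F.col(col).isNull()).count()
    if nulls > 0:
        errors.append(f"{nulls} null values in {col}")

# Uniqueness check: the business key must be unique within the batch.
dupes = row_count - batch.select("customer_id").distinct().count()
if dupes > 0:
    errors.append(f"{dupes} duplicate customer_id values")

# Fail fast so bad data never reaches the curated zone.
if errors:
    raise ValueError("quality checks failed: " + "; ".join(errors))

batch.write.mode("append").parquet("/curated/customers")
```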
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
As a Machine Learning Manager at our company, you will be responsible for leading a high-performing team and driving the development of innovative machine learning solutions. Your expertise in machine learning techniques, programming languages like Python, R, Java, and Scala, data structures, data modeling, and software architecture will be crucial for success in this role.

Your role will involve developing and implementing machine learning models to solve complex business problems, collaborating with cross-functional teams to translate business objectives into technical solutions, and designing scalable machine learning systems. You will also conduct exploratory data analysis, feature engineering, and model evaluation, staying updated on the latest advancements in machine learning research and technologies.

In addition to technical excellence, strong leadership capabilities are essential in this role. You will be required to mentor junior team members, drive best practices in coding, testing, and documentation, and lead end-to-end machine learning initiatives aligned with business goals. Your expertise in defining scalable ML architectures, selecting appropriate tools and frameworks, and leading cross-functional teams to deliver impactful ML solutions will be critical.

The preferred candidate for this position will have at least 6 years of experience in Machine Learning, with a Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field. Proficiency in programming languages, hands-on experience with machine learning frameworks, and a solid understanding of data structures and software architecture are required. Experience with MLOps and MLaaS is highly advantageous, along with strong problem-solving skills and excellent communication abilities.

Joining our team will offer you the opportunity to focus on impactful tasks, work in a flexible and supportive environment, maintain a healthy work-life balance, and collaborate with a motivated and goal-oriented team. You will benefit from a competitive compensation package, work with cutting-edge technologies, and see your work make a tangible impact on the lives of millions of customers.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
You will be responsible for designing and developing data solutions using Elasticsearch/OpenSearch, integrating with various data sources and systems. Your role will involve architecting, implementing, and optimizing data solutions, along with applying your expertise in machine learning to develop models, algorithms, and pipelines for data analysis, prediction, and anomaly detection within Elasticsearch/OpenSearch environments. Additionally, you will design and implement data ingestion pipelines to collect, cleanse, and transform data from diverse sources, ensuring data quality and integrity.

As part of your responsibilities, you will manage and administer Elasticsearch/OpenSearch clusters, including configuration, performance tuning, index optimization, and monitoring. You will work on optimizing complex queries and search operations in Elasticsearch/OpenSearch to ensure efficient and accurate retrieval of data. Troubleshooting and resolving issues related to Elasticsearch/OpenSearch performance, scalability, and reliability will be a key aspect of your role, requiring close collaboration with DevOps and Infrastructure teams.

Collaboration with cross-functional teams, including data scientists, software engineers, and business stakeholders, will be essential to understand requirements and deliver effective data solutions. You will also be responsible for documenting technical designs, processes, and best practices related to Elasticsearch/OpenSearch and machine learning integration, providing guidance and mentorship to junior team members.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Strong experience in designing, implementing, and managing large-scale Elasticsearch/OpenSearch clusters is required, along with expertise in machine learning techniques and frameworks such as TensorFlow, PyTorch, or scikit-learn. Proficiency in programming languages like Python, Java, or Scala, and experience with data processing frameworks and distributed computing are necessary. A solid understanding of data engineering concepts, cloud platforms, and containerization technologies is highly desirable.

The ideal candidate will possess strong analytical and problem-solving skills, with the ability to work effectively in a fast-paced, collaborative environment. Excellent communication skills are crucial, enabling you to translate complex technical concepts into clear explanations for both technical and non-technical stakeholders. A proven track record of successfully delivering data engineering projects on time and within budget is also expected.

If you have 5+ years of experience in data ingestion and transformation, Elasticsearch/OpenSearch administration, machine learning integration, and related areas, we invite you to send your CV to careers@eventussecurity.com. Join us in Ahmedabad and be part of our SOC - Excellence team.
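A minimal sketch of the ingest-and-cleanse pattern this role describes, using the official Python Elasticsearch client's bulk helper. The host, index name, and record fields are hypothetical assumptions:

```python
# Sketch of a cleanse-and-ingest pipeline into Elasticsearch/OpenSearch
# using the Python client's bulk helper. Host, index, and fields are
# illustrative assumptions.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://search.example.internal:9200")

def cleanse(record: dict) -> dict:
    """Normalize one raw event before indexing."""
    return {
        "user": (record.get("user") or "unknown").strip().lower(),
        "action": record.get("action"),
        "timestamp": record.get("ts"),
    }

def actions(raw_records):
    # Generator of bulk actions; skipping records that fail cleansing
    # keeps one bad row from aborting the whole batch.
    for rec in raw_records:
        doc = cleanse(rec)
        if doc["action"] is None:
            continue
        yield {"_index": "security-events", "_source": doc}

raw = [{"user": " Alice ", "action": "login", "ts": "2025-08-02T10:00:00Z"}]
ok, failed = helpers.bulk(es, actions(raw), raise_on_error=False)
print(f"indexed={ok} failed={len(failed)}")
```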
Posted 6 days ago
7.0 - 11.0 years
0 Lacs
ahmedabad, gujarat
On-site
We are looking for an accomplished Lead Data Engineer with 7 to 10 years of experience in data engineering to join our dynamic team in either Ahmedabad or Pune. Your expertise in Databricks will play a crucial role in enhancing our data engineering capabilities and working with advanced technologies, including Generative AI.

Your key responsibilities will include leading the design, development, and optimization of data solutions using Databricks to ensure scalability, efficiency, and security. You will collaborate with cross-functional teams to gather and analyze data requirements, translating them into robust data architectures and solutions. Developing and maintaining ETL pipelines, leveraging Databricks, and integrating with Azure Data Factory when necessary will also be part of your role. Furthermore, you will implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation. Adhering to data quality, governance, and security practices will be essential to maintain the integrity and reliability of data solutions. Providing technical leadership and mentorship to junior engineers to foster an environment of learning and growth will also be a key aspect of your role. It is crucial to stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven expertise in building and optimizing data solutions using Databricks, integration with Azure Data Factory/AWS Glue, proficiency in SQL, and programming languages like Python or Scala are essential. A strong understanding of data modeling, ETL processes, Data Warehousing/Data Lakehouse concepts, cloud platforms (particularly Azure), and containerization technologies such as Docker is required. Excellent analytical, problem-solving, and communication skills are a must, along with demonstrated leadership ability and experience mentoring junior team members.

Preferred qualifications include experience with Generative AI technologies and applications, familiarity with other cloud platforms like AWS or GCP, and knowledge of data governance frameworks and tools.

In return, we offer flexible timings, a five-day work week, a healthy environment, celebrations, opportunities to learn and grow, community building, and medical insurance benefits. Join us and be part of a team that values innovation, collaboration, and professional development.
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group. As a global business line, Capgemini Invent combines expertise in strategy, technology, data science, and creative design to assist CxOs in envisioning and constructing what's next for their businesses.

In this role, you will be responsible for developing and maintaining scalable data pipelines using AWS services. Your tasks will include optimizing data storage and retrieval processes, ensuring data security and compliance with industry standards, and handling large volumes of data while maintaining accuracy, security, and accessibility. Additionally, you will be involved in developing data set processes for data modeling, mining, and production, implementing data quality and validation processes, and collaborating closely with data scientists, analysts, and IT departments to understand data requirements. You will work with data architects, modelers, and IT team members on project goals, monitor and troubleshoot data pipeline issues, conduct performance tuning and optimization of data solutions, and implement disaster recovery procedures. Your role will also involve ensuring the seamless integration of HR data from various sources into the cloud environment, researching opportunities for data acquisition and new uses for existing data, and staying up to date with the latest cloud technologies and best practices. You will be expected to recommend ways to improve data reliability, efficiency, and quality.

To be successful in this position, you should have 10+ years of experience in cloud data engineering and proficiency in cloud platforms such as AWS, Azure, or Google Cloud. Experience with data pipeline tools like Apache Spark and AWS Glue, strong programming skills in languages such as Python, SQL, Java, or Scala, and familiarity with Snowflake or Informatica are advantageous. Knowledge of data privacy laws, security best practices, database technologies, and a demonstrated learner attitude are also essential. Strong communication, teamwork skills, and the ability to work in an Agile framework while managing multiple projects simultaneously will be key to excelling in this role.

At Capgemini, we value flexible work arrangements to support a healthy work-life balance. We offer various career growth programs and diverse professions to help you explore a world of opportunities. Additionally, you will have the opportunity to equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner, dedicated to helping organizations accelerate their transition to a digital and sustainable world. With a diverse team of over 340,000 members in more than 50 countries, Capgemini leverages its strong heritage and expertise in AI, cloud, and data to address clients' business needs comprehensively. We are committed to unlocking technology's value and creating tangible impact for enterprises and society.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Data Scientist at GlobalLogic, you will be responsible for working as a Full-stack AI Engineer. You must have proficiency in programming languages like Python and Java/Scala, and experience with data processing libraries such as Pandas, NumPy, and Scikit-learn. Additionally, you should be proficient in distributed computing platforms like Apache Spark (PySpark, Scala) and Torch. It is essential to have expertise in API development using FastAPI and Spring Boot, and a good understanding of O&M - logging, monitoring, fault management, security, etc.

Furthermore, it would be beneficial to have hands-on experience with deployment and orchestration tools like Docker, Kubernetes, and Helm. Experience with cloud platforms such as AWS (SageMaker/Bedrock), GCP, or Azure is also advantageous. Strong programming skills in TensorFlow, PyTorch, or similar ML frameworks for training and deployment are considered good-to-have qualities for this role.

At GlobalLogic, we prioritize a culture of caring, where you will experience an inclusive environment of acceptance and belonging. Continuous learning and development opportunities are provided to help you grow personally and professionally. You will have the chance to work on interesting and meaningful projects that make an impact for clients worldwide. We believe in the importance of work-life balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve the perfect balance. As a high-trust organization, integrity is key, and you can trust GlobalLogic to provide a safe, reliable, and ethical work environment. By joining us, you become part of a team that values truthfulness, candor, and integrity in everything we do.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with some of the world's largest and most innovative companies. Since 2000, we have been at the forefront of the digital revolution, creating innovative digital products and experiences. Join us in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Software Engineer at Dun & Bradstreet, you will play a crucial role in developing and maintaining systems that support our core services across legacy Datacenters, AWS, and GCP. Your responsibilities will include software development for our Big Data Platform, ensuring the creation of Unit Tests for your code, and actively participating in daily Pull Request Reviews. The ideal candidate for this role will exhibit a passion for development and a curious nature towards big data platforms, coupled with a strong development and problem-solving mindset. Collaboration with Development, SRE, and DevOps teams will be essential to translate business requirements and functional specifications into innovative solutions, implementing performant, scalable program designs, code modules, and stable systems.

Your Key Responsibilities will include:
- Developing scalable, distributed software systems and engaging in projects that demand research, interactivity, and the ability to pose pertinent questions.
- Designing, developing, debugging, supporting, maintaining, and testing software applications.
- Working closely with diverse teams to contribute to a wide range of applications and solutions.
- Assisting Subject Matter Experts in offering consultation to ensure that new and existing software solutions adhere to industry best practices, strategies, and architectures, while actively pursuing professional growth.
- Enhancing code quality through activities like writing unit tests, automation, and conducting code reviews.
- Identifying opportunities for continuous improvement in technology solutions, be it optimizing system performance or addressing technical debt.

Key Requirements for this role include:
- 6+ years of experience in developing commercial software within an agile SDLC environment.
- Proficiency in triaging data and performance issues, possessing strong analytical and problem-solving skills to explore, analyze, and interpret large datasets.
- Familiarity with cloud-based technologies such as AWS/GCP, with certifications being a plus.
- Expertise in distributed processing systems like Spark/Hadoop, programming skills in Python and Scala, experience with ETL tools, and data pipeline orchestration frameworks such as Airflow.
- Strong understanding of SQL and NoSQL databases, along with familiarity with data warehousing solutions like BigQuery.
- Demonstrating an ownership mindset, problem-solving skills, curiosity, and proactive behavior to drive success through collaboration and connection with team members. Fluency in English and relevant working market languages is advantageous where applicable.

This position is internally titled as Senior Software Engineer at Dun & Bradstreet. For more information on Dun & Bradstreet job postings, please visit:
- https://www.dnb.com/about-us/careers-and-people/joblistings.html
- https://jobs.lever.co/dnb

Kindly note that all official communication from Dun & Bradstreet will originate from an email address ending in @dnb.com.
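Since the role lists data pipeline orchestration frameworks such as Airflow, a minimal sketch of that pattern follows: a three-task DAG with retries. The DAG id, schedule, and task bodies are hypothetical, and the `schedule` argument assumes Airflow 2.4+:

```python
# Minimal Airflow DAG of the orchestration style referenced above.
# DAG id, schedule, and task logic are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")

def transform():
    print("run Spark transform")

def load():
    print("publish to warehouse")

with DAG(
    dag_id="daily_ingest_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```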
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
You are being offered an exciting opportunity to join an esteemed Investment Bank as a Java Scala Developer in either Mumbai or Bengaluru. As a part of the Distributed Risk Storage team, you will be responsible for the development and maintenance of libraries and components essential for storing, retrieving, and processing vast amounts of data efficiently. The team you will be working with deals with billions of data points generated daily across numerous grid engines both in-house and on public cloud platforms. Your main task will involve contributing to the creation of a customized data storage and processing platform that can meet the intricate demands regarding data encryption, entitlements, lineage, retention, volume, and cost.

To excel in this role, you should be a technically proficient and enthusiastic developer with a keen interest in working on large data systems. Collaborating within an agile team operating across various time zones, you will primarily utilize Scala and Java for developing and enhancing new and existing components. Additionally, you will play a pivotal role in different stages of the product lifecycle ranging from analysis to development and testing. Your responsibilities will also include proposing enhancements to existing systems and processes while taking ownership of specific areas. By closely collaborating with seasoned developers, you will have ample opportunities to enhance your skills and expertise in this role. Your key clients will be other developers, and you will work closely with them to comprehend their evolving needs and deliver top-notch solutions.

The ideal candidate for this role should possess proficiency in Scala and Java, although expertise in other languages like C++ or Python will also be considered. Strong problem-solving abilities, analytical skills, and technical inquisitiveness are crucial for this role. A solid understanding of core computer science concepts and code optimization techniques is essential. While prior experience in the financial sector is not mandatory, familiarity with working on large enterprise systems is advantageous. Knowledge of Public Cloud and Kubernetes can be beneficial, as well as experience with distributed systems or low-level coding. It is imperative to recognize the significance of testing and documentation in ensuring the delivery of high-quality solutions.
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You will be joining one of the Big Four companies in India in either Bangalore or Mumbai. As a Spark/Scala Developer specializing in Big Data, you will play a key role in designing and implementing scalable data solutions while ensuring optimal performance. Your responsibilities will include translating business requirements into technical deliverables and contributing to the overall success of the team.

To excel in this role, you should have 3 to 5 years of experience as a Big Data Engineer or in a similar position. Additionally, a minimum of 2 years of experience in Scala programming and SQL is required. You will be expected to design, modify, and implement solutions for handling data in the Hadoop Data Lake for both batch and streaming workloads using Scala & Apache Spark. Alongside this, debugging, optimization, and performance tuning of Spark jobs will be a part of your daily tasks. Your ability to translate functional requirements and user stories into technical solutions will be crucial. Furthermore, your expertise in developing and debugging complex SQL queries to extract valuable business insights will be highly beneficial.

While not mandatory, any prior development experience with cloud services such as AWS, Azure, or GCP will be considered advantageous for this role.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
delhi
On-site
Eskimi is a full-stack programmatic advertising platform that reaches 96% of the open web, enabling the planning, building, and execution of high-performing advertising campaigns across 162 markets. The company's commitment to premium creativity and innovative formats sets it apart, delivering optimal outcomes for advertising agencies and brands worldwide. At Eskimi, growth, ownership, innovation, drive, and collaboration define how things are done. With a global team spanning 30+ countries and 5 continents, diversity and inclusion thrive in a dynamic environment.

The engineering department at Eskimi is a hub of creativity and innovation, comprising curious engineers navigating adtech complexities with persistence and ingenuity. Empowering team members to contribute unique perspectives is a core belief, emphasizing collaboration for brainstorming solutions and refining algorithms. The team explores cutting-edge technologies and methodologies, fostering camaraderie and support to achieve extraordinary results.

As a Backend Developer at Eskimi, you will play a pivotal role in shaping the Ad Exchange by building features that directly impact performance and revenue generation. Your work will involve fine-tuning real-time bidding and scaling systems to handle substantial traffic, contributing to the core of the platform. You will join a supportive team dedicated to continuous learning and improvement, as part of a collaborative environment.

Key Responsibilities:
- Delve into the fast-paced world of Ad Tech
- Design and implement revenue-generating features
- Collaborate with cross-functional teams
- Work in an Agile environment with frequent iterations
- Develop backend services in Scala, with support for learning if unfamiliar
- Enhance and refactor the existing codebase for cleanliness, scalability, and robustness

Qualifications:
- Proficiency in modern programming languages like Java, C#, Go, or Scala
- Strong communication skills for effective collaboration
- Solid background in various storage systems and knowing when to use each
- Familiarity with Git, testing practices, and system monitoring tools
- Commitment to writing durable code for long-term user service
- Ability to work with remote, distributed teams across time zones

Benefits:
- Flexible work arrangements, including hybrid models and remote work options
- Professional development opportunities and mentorship programs
- Recognition culture with bonus systems and celebration of achievements
- Additional perks such as private health insurance, volunteer days, and birthday leave
- Opportunities to be part of a fast-growing AdTech company and contribute to changing the digital advertising landscape globally

Join Eskimi and be part of a team that pushes boundaries in digital advertising, where growth knows no limits.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Your focus will be on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing.

Your responsibilities will include developing Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. You will also need to develop and maintain Kafka-based data pipelines, which involves designing Kafka Streams, setting up Kafka clusters, and ensuring efficient data flow. Additionally, you will create and optimize Spark applications using Scala and PySpark to process large datasets and implement data transformations and aggregations.

Another important aspect of your role will be integrating Kafka with Spark for real-time processing. You will be building systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming. Collaboration with data teams including data engineers, data scientists, and DevOps is essential to design and implement data solutions effectively. You will also need to tune and optimize Spark and Kafka clusters to ensure high performance, scalability, and efficiency of data processing workflows.

Writing clean, functional, and optimized code while adhering to coding standards and best practices will be a key part of your daily tasks. Troubleshooting and resolving issues related to Kafka and Spark applications, as well as maintaining documentation for Kafka configurations, Spark jobs, and other processes, are also important aspects of the role. Continuous learning and applying new advancements in functional programming, big data, and related technologies is crucial.

Proficiency in the Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala), Spark (Scala, Python), Kafka, ETL processes, and data ingestion tools is required. Deep hands-on expertise in PySpark, Scala, and Kafka, programming languages such as Scala, Python, or Java for developing Spark applications, and SQL for data querying and analysis are necessary. Additionally, familiarity with data warehousing concepts, Linux/Unix operating systems, problem-solving, analytical skills, and version control systems will be beneficial in performing your duties effectively.

This is a full-time position in the Technology job family group, specifically in Applications Development. If you require a reasonable accommodation to use search tools or apply for a career opportunity due to a disability, please review Accessibility at Citi. You can also refer to Citi's EEO Policy Statement and the Know Your Rights poster for more information.
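As a concrete illustration of the Kafka-to-Spark integration described above, the sketch below reads a topic with Structured Streaming, parses the JSON payload, and maintains a running aggregate. Broker addresses, the topic name, and the schema are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath:

```python
# Sketch of Kafka-to-Spark integration with Structured Streaming.
# Broker addresses, topic, and schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
])

# Subscribe to the topic; Kafka delivers the payload in the binary `value` column.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "trades")
          .option("startingOffsets", "latest")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Continuous aggregation over the stream.
avg_price = events.groupBy("symbol").agg(F.avg("price").alias("avg_price"))

# Checkpointing makes the query restartable with consistent state.
query = (avg_price.writeStream
         .outputMode("complete")
         .format("console")
         .option("checkpointLocation", "/chk/trades_avg")
         .start())
query.awaitTermination()
```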
Posted 6 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Senior Data Engineer – Databricks (Azure/AWS)

Role Overview: We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployment.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, to perform crucial job functions. That's why we're committed to creating a workforce where all individuals can do their best work.
It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
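The bronze-silver-gold lakehouse model this role is built around is easiest to see in code. Below is a minimal batch sketch of the three layers using Delta tables; the paths, schema, and business rules are illustrative assumptions rather than anything Oracle-specific:

```python
# Sketch of the bronze/silver/gold (medallion) flow described above,
# using Delta tables. Paths, schema, and rules are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-batch").getOrCreate()

# Bronze: raw data landed as-is, with load metadata for lineage.
bronze = (spark.read.json("/landing/payments/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/lake/bronze/payments")

# Silver: validated, deduplicated, conformed records.
silver = (spark.read.format("delta").load("/lake/bronze/payments")
          .dropDuplicates(["payment_id"])
          .filter(F.col("amount") > 0)
          .withColumn("payment_date", F.to_date("payment_ts")))
silver.write.format("delta").mode("overwrite").save("/lake/silver/payments")

# Gold: business-level aggregate ready for analytics and reporting.
gold = (spark.read.format("delta").load("/lake/silver/payments")
        .groupBy("payment_date", "merchant_id")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("payments")))
gold.write.format("delta").mode("overwrite").save("/lake/gold/daily_merchant_totals")
```

In a production pipeline each layer would usually be its own scheduled job, with data quality checks between layers.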
Posted 6 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Data Architect – Databricks (Azure/AWS)

Role Overview: We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
- 8+ years of experience in data architecture or engineering roles, with at least 3+ years specializing in cloud-based big data solutions.
- Hands-on expertise with Databricks on Azure or AWS.
- Deep understanding of Delta Lake, medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
- Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks.
- Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs.
- Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems.
- Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
- Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
- Databricks Certified Data Engineer or Architect.
- Azure/AWS cloud certifications.
- Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
- Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
- Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
- Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
- Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
- Lead data modeling, schema design, performance optimization, and data governance best practices.
- Collaborate with data engineering, platform, and security teams to build production-ready solutions.
- Create standards for ingestion frameworks, job orchestration (e.g., workflows, Airflow), and data quality validation.
- Support cost optimization, scalability design, and operational monitoring frameworks.
- Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
- Ability to lead architecture discussions with technical and business stakeholders.
- Passion for modern cloud data architectures and continuous learning.
- Pragmatic and solution-driven approach to migrations.

Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability.
Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
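Returning to the migration responsibility in the Data Architect posting above: moves from on-premises RDBMS sources typically begin with partitioned JDBC extracts into the bronze layer. The sketch below uses standard Spark JDBC options; the connection URL, credentials, table name, and partition bounds are hypothetical, and the Oracle JDBC driver is assumed to be on the cluster classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-jdbc-ingest-sketch").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@//onprem-db:1521/ORCLPDB")  # hypothetical host/service
          .option("dbtable", "SALES.ORDERS")                            # hypothetical schema.table
          .option("user", "etl_user")
          .option("password", "********")  # in practice, read from a Databricks secret scope
          .option("fetchsize", "10000")
          # Partitioned reads spread the extract across executors instead of one connection.
          .option("partitionColumn", "ORDER_ID")
          .option("lowerBound", "1")
          .option("upperBound", "10000000")
          .option("numPartitions", "8")
          .load())

# Land the extract in the bronze layer as a Delta table.
orders.write.format("delta").mode("overwrite").saveAsTable("bronze.orders_oracle")
```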
Posted 6 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Data Architect – Databricks (Azure/AWS). The role overview, required skills and experience, preferred qualifications, key responsibilities, and attributes for success are identical to the Data Architect posting above.
Posted 6 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview: We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks on Azure or AWS. The role focuses on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable, reliable data solutions (sketches of a validated streaming pipeline and its orchestration follow this listing).

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, with 2+ years working specifically on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-premises systems (Oracle, Hadoop, Exadata) into cloud data platforms.
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployments.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.
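Since this posting lists Delta Live Tables and data quality validation, here is a minimal DLT sketch showing declarative expectations on a streaming ingest. The dataset names, landing path, and columns are hypothetical; `spark` is provided by the DLT runtime.

```python
import dlt  # available inside a Databricks Delta Live Tables pipeline
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events ingested incrementally with Auto Loader.")
def bronze_events():
    # Auto Loader (cloudFiles) discovers new files as they land; path is hypothetical.
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/landing/events/"))

@dlt.table(comment="Silver: typed, validated events.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")   # failing rows are dropped
@dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")  # and counted in pipeline metrics
def silver_events():
    return (dlt.read_stream("bronze_events")
            .withColumn("event_ts", F.to_timestamp("event_ts")))
```

Expectations keep the validation rules next to the table definition, so the data quality checks the posting asks for are versioned and monitored alongside the pipeline code.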
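For the orchestration skills both postings mention, a minimal Airflow sketch using the Databricks provider might look like the following. The DAG name, cluster spec, and notebook path are hypothetical; the operator and parameters come from the apache-airflow-providers-databricks package.

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="databricks_silver_refresh",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    refresh_silver = DatabricksSubmitRunOperator(
        task_id="refresh_silver",
        databricks_conn_id="databricks_default",  # connection configured in Airflow
        new_cluster={
            "spark_version": "14.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",  # Azure node type; use an EC2 type on AWS
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/data-eng/silver_orders"},  # hypothetical path
    )
```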
Posted 6 days ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description: Senior Data Engineer – Databricks (Azure/AWS). The role overview, required skills and experience, preferred qualifications, key responsibilities, and attributes for success are identical to the Senior Data Engineer posting above.
Posted 6 days ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description: Data Architect – Databricks (Azure/AWS). The role overview, required skills and experience, preferred qualifications, key responsibilities, and attributes for success are identical to the Data Architect posting above.
Posted 6 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: Data Architect – Databricks (Azure/AWS). The role overview, required skills and experience, preferred qualifications, key responsibilities, and attributes for success are identical to the Data Architect posting above.
Posted 6 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: Senior Data Engineer – Databricks (Azure/AWS). The role overview, required skills and experience, preferred qualifications, key responsibilities, and attributes for success are identical to the Senior Data Engineer posting above.
Posted 6 days ago