5.0 - 9.0 years
10 - 20 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Job Description:
- Expertise in GitLab Actions and Git workflows
- Databricks administration experience
- Strong scripting skills (Shell, Python, Bash)
- Experience with Jira integration in CI/CD workflows
- Familiarity with DORA metrics and performance tracking
- Proficient with SonarQube and JFrog Artifactory
- Deep understanding of branching and merging strategies
- Strong CI/CD and automated-testing integration skills
- Git and Jira integration
- Infrastructure as Code experience (Terraform, Ansible)
- Exposure to cloud platforms (Azure/AWS)
- Familiarity with monitoring/logging tools (Dynatrace, Grafana, Prometheus, ELK)

Roles & Responsibilities:
- Build and manage CI/CD pipelines using GitLab Actions for seamless integration and delivery.
- Administer Databricks workspaces, including access control, cluster management, and job orchestration.
- Automate infrastructure and deployment tasks using scripts (Shell, Python, Bash, etc.).
- Implement source-control best practices, including branching, merging, and tagging.
- Integrate Jira with CI/CD pipelines to automate ticket updates and traceability.
- Track and improve DORA metrics (Deployment Frequency, Lead Time for Changes, Mean Time to Restore, Change Failure Rate).
- Manage code quality using SonarQube and the artifact lifecycle using JFrog Artifactory.
- Ensure end-to-end testing is integrated into the delivery pipelines.
- Collaborate across Dev, QA, and Ops teams to streamline DevOps practices.
- Troubleshoot build and deployment issues and ensure high system reliability.
- Maintain up-to-date documentation and contribute to DevOps process improvements.
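The DORA metrics this role tracks can be computed from plain deployment records; a minimal Python sketch (the record fields and sample values are illustrative assumptions, not any specific tool's schema):

```python
from datetime import datetime

# Hypothetical deployment records: commit time, deploy time, and outcome.
deployments = [
    {"committed": datetime(2024, 6, 1, 9), "deployed": datetime(2024, 6, 1, 17), "failed": False},
    {"committed": datetime(2024, 6, 2, 10), "deployed": datetime(2024, 6, 3, 10), "failed": True},
    {"committed": datetime(2024, 6, 4, 8), "deployed": datetime(2024, 6, 4, 20), "failed": False},
]

def dora_metrics(deploys, window_days=7):
    """Compute three of the four DORA metrics over a reporting window."""
    frequency = len(deploys) / window_days  # deployments per day
    lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                  for d in deploys]
    mean_lead_time_hours = sum(lead_times) / len(lead_times)
    change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
    return frequency, mean_lead_time_hours, change_failure_rate
```

Mean Time to Restore would need incident open/close timestamps, which this sketch does not model.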
Posted 1 month ago
1.0 - 4.0 years
7 - 10 Lacs
Chennai
Work from Office
Responsibilities:
* Design, develop, and optimize data pipelines using Databricks and Azure Data Factory.
* Collaborate with cross-functional teams on project requirements and deliverables.
Posted 1 month ago
4.0 - 8.0 years
10 - 15 Lacs
Mysuru
Work from Office
- Identify business problems, understand the customer issue, and fix it.
- Evaluate recurring issues and work toward permanent solutions; focus on service improvement.
- Troubleshoot technical issues and design flaws.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- BE / B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 3-5 years of experience.
- Expert in Azure IaaS, PaaS & SaaS services, with hands-on experience in the following: VM, Storage Account, Load Balancer, Application Gateway, VNET, Route Table, Azure Bastion, Disaster Recovery, Backup, NSG, Azure Update Manager, Key Vault, etc.
- Azure Web App, Function App, Logic App, AKS (Azure Kubernetes Service) & containerization, Docker, Event Hub, Redis Cache, Service Mesh and Istio, App Insights, Databricks, AD, DNS, Log Analytics Workspace, ARO (Azure Red Hat OpenShift).
- Orchestration & containerization: Docker, Kubernetes, Red Hat OpenShift.
- Security management: firewall management, FortiGate firewall.

Preferred technical and professional experience:
- Monitoring through cloud-native tools (CloudWatch, CloudTrail, Azure Monitor, Activity Log, vROps, and Log Insight).
- Server monitoring and management (Windows, Linux, AIX, AWS Linux, Ubuntu Linux).
- Storage monitoring and management (Blob, S3, EBS, backups, recovery, snapshots).
Posted 1 month ago
6.0 - 9.0 years
27 - 42 Lacs
Kochi
Work from Office
Skill: Databricks
Experience: 5 to 14 years
Location: Kochi (walk-in on 14th June)

- Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
- Work experience with Databricks Unity Catalog.
- Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
- Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
- Optimize cluster performance and scalability to handle large volumes of data processing.
- Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Conduct performance tuning and optimization of Spark jobs on Azure Databricks.
- Provide technical guidance and mentorship to junior data engineers.
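Data quality checks of the kind this role asks for are often expressed as named per-record rules applied at ingestion; a minimal, framework-free Python sketch (the rule names, fields, and sample records are illustrative assumptions):

```python
# Minimal data-quality gate: each rule maps a record to True (pass) or False (fail).
rules = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "country_known": lambda r: r.get("country") in {"IN", "US", "DE"},
}

def run_quality_checks(records, rules):
    """Return {rule_name: number_of_failing_records}; an empty dict means all clean."""
    failures = {}
    for name, rule in rules.items():
        bad = sum(1 for r in records if not rule(r))
        if bad:
            failures[name] = bad
    return failures

records = [
    {"id": 1, "amount": 10.0, "country": "IN"},
    {"id": None, "amount": -5.0, "country": "IN"},
    {"id": 3, "amount": 2.5, "country": "XX"},
]
```

In a Databricks pipeline the same idea is usually applied per-column over DataFrames, but the pass/fail-counting structure is the same.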
Posted 1 month ago
10.0 - 16.0 years
25 - 27 Lacs
Chennai
Work from Office
We at Dexian India are looking to hire a Cloud Data PM with over 10 years of hands-on experience in AWS/Azure, DWH, and ETL. The role is based in Chennai, with a shift from 2.00pm to 11.00pm IST. Key qualifications we seek in candidates include:
- Solid understanding of SQL and data modeling
- Proficiency in DWH architecture, including EDW/DM concepts and Star/Snowflake schema
- Experience in designing and building data pipelines on the Azure cloud stack
- Familiarity with Azure Data Explorer, Data Factory, Databricks, Synapse Analytics, Azure Fabric, Azure Analysis Services, and Azure SQL Data Warehouse
- Knowledge of Azure DevOps and CI/CD pipelines
- Previous experience managing scrum teams and working as a Scrum Master or Project Manager on at least 2 projects
- Exposure to on-premises transactional database environments such as Oracle, SQL Server, Snowflake, MySQL, and/or Postgres
- Ability to lead enterprise data strategies, including data lake delivery
- Proficiency in data visualization tools such as Power BI or Tableau, and statistical analysis using R or Python
- Strong problem-solving skills with a track record of deriving business insights from large datasets
- Excellent communication skills and the ability to provide strategic direction to technical and business teams
- Prior experience in presales, RFP and RFI responses, and proposal writing is mandatory
- Capability to explain complex data solutions clearly to senior management
- Experience in implementing, managing, and supporting data warehouse projects or applications
- Track record of leading full-cycle implementation projects related to Business Intelligence
- Strong team and stakeholder management skills
- Attention to detail, accuracy, and ability to meet tight deadlines
- Knowledge of application development, APIs, microservices, and integration components

Tools & Technology Experience Required:
- Strong hands-on experience in SQL or PL/SQL
- Proficiency in Python
- SSIS or Informatica (one of the two tools is mandatory)
- BI: Power BI or Tableau (one of the two tools is mandatory)
Posted 1 month ago
5.0 - 9.0 years
12 - 20 Lacs
Chennai
Remote
USXI is looking for Big Data Developers who will work on collecting, storing, processing, and analyzing huge sets of data. Candidates must have exceptional analytical skills, fluency with tools such as MySQL, and strong Python, Shell, Java, PHP, and T-SQL programming skills. The candidate must also be technologically adept, with strong computer skills, capable of developing databases using SSIS packages, T-SQL, MSSQL, and MySQL scripts, and able to design, build, and maintain the business's ETL pipeline and data warehouse. The candidate will also demonstrate expertise in data modeling and query performance tuning on SQL Server, MySQL, Redshift, Postgres, or similar platforms.

Key responsibilities:
- Develop and maintain data pipelines.
- Design and implement ETL processes.
- Hands-on data modeling: design conceptual, logical, and physical data models with Type 1 and Type 2 dimensions.
- Platform expertise: leverage Microsoft Fabric, Snowflake, and Databricks to optimize data storage, transformation, and retrieval processes.
- Move the ETL code base from on-premises to cloud architecture.
- Understand data lineage and governance for different data sources.
- Maintain clean and consistent access to all data sources.
- Deploy code using CI/CD pipelines (hands-on experience).
- Assemble large and complex data sets strategically to meet business requirements.
- Enable business users to bring data-driven insights into their business decisions through reports and dashboards.

Required qualifications:
- Hands-on experience with big data technologies including Scala or Spark (Azure Databricks preferred), Hadoop, Hive, and HDFS; Python, Java & SQL.
- Knowledge of Microsoft's Azure cloud.
- Experience with, and commitment to, development and testing best practices.
- DevOps experience with continuous integration/delivery best practices, technologies, and tools.
- Experience deploying Azure SQL Database and Azure Data Factory; well acquainted with other Azure services, including Azure Data Lake and Azure ML.
- Experience implementing REST API calls and authentication.
- Experience working with agile project management methodologies.
- Computer Science degree/diploma.
- Microsoft Certified: DP-203 Azure Data Engineer Associate.
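The Type 1 / Type 2 dimension handling this posting mentions can be illustrated with a small in-memory sketch: a Type 2 change expires the current row and appends a new version, whereas a Type 1 change would simply overwrite in place (the table layout and column names here are illustrative assumptions):

```python
from datetime import date

# Dimension table as a list of row dicts; exactly one current row per business key.
dim_customer = [
    {"customer_id": 42, "city": "Pune", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def apply_scd2(dim, customer_id, new_city, change_date):
    """Slowly Changing Dimension Type 2: expire the current row, append a new one."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["city"] == new_city:
                return  # no change, nothing to version
            row["valid_to"] = change_date   # close out the old version
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None, "is_current": True})

apply_scd2(dim_customer, 42, "Mumbai", date(2024, 6, 1))
```

History is preserved: queries as-of a past date filter on the `valid_from`/`valid_to` range instead of `is_current`.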
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Ahmedabad
Work from Office
Role & responsibilities

Senior Data Engineer Job Description
GRUBBRR is seeking a mid/senior-level data engineer to help build our next-generation analytical and big data solutions. We strive to build cloud-native, consumer-first, UX-friendly kiosks and online applications across a variety of verticals supporting enterprise clients and small businesses. Behind our consumer applications, we integrate and interact with a deep stack of payment, loyalty, and POS systems. In addition, we provide actionable insights to enable our customers to make informed decisions. Our challenge and goal is to provide a frictionless experience for our end consumers and easy-to-use, smart management capabilities for our customers to maximize their ROIs.

Responsibilities:
- Develop and maintain data pipelines.
- Ensure data quality and accuracy.
- Design, develop, and maintain large, complex sets of data that meet non-functional and functional business requirements.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using cloud technologies.
- Build analytical tools that utilize the data pipelines.

Skills:
- Solid experience with SQL & NoSQL.
- Strong data modeling skills for data lakes, data warehouses, and data marts, including dimensional modeling and star schemas.
- Proficiency with Azure Data Factory data integration technology.
- Knowledge of Hadoop or similar big data technology.
- Knowledge of Apache Kafka, Spark, Hive, or equivalent.
- Knowledge of Azure or AWS analytics technologies.

Qualifications:
- BS in Computer Science, Applied Mathematics, or a related field (MS preferred).
- At least 8 years of experience working with OLAP systems.
- Microsoft Azure or AWS data engineer certification a plus.
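The star schemas asked for above center a fact table on foreign keys into dimension tables; a minimal sketch using SQLite as a stand-in warehouse (the table and column names are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One fact table (sales) pointing at two dimension tables (store, product).
cur.executescript("""
CREATE TABLE dim_store   (store_id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (store_id INTEGER, product_id INTEGER, amount REAL);
INSERT INTO dim_store   VALUES (1, 'Pune'), (2, 'Chennai');
INSERT INTO dim_product VALUES (10, 'Beverages'), (11, 'Snacks');
INSERT INTO fact_sales  VALUES (1, 10, 100.0), (1, 11, 50.0), (2, 10, 75.0);
""")

# A typical star-schema query: join the fact to a dimension, then aggregate.
cur.execute("""
SELECT s.city, SUM(f.amount)
FROM fact_sales f
JOIN dim_store s ON s.store_id = f.store_id
GROUP BY s.city
ORDER BY s.city
""")
totals = cur.fetchall()
```

A snowflake schema would further normalize the dimensions (e.g. a separate region table referenced by dim_store); the fact table is unchanged.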
Posted 1 month ago
5.0 - 9.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Hi! Greetings of the day!! We have openings with one of our product-based client companies.

Location: Hyderabad
Notice Period: Only Immediate - 30 Days
Work Mode: Hybrid

Key Purpose Statement - Core Mission
The Senior Data Engineer will play a key role in designing, building, and optimizing our data infrastructure and pipelines, leveraging deep expertise in Azure Synapse, Databricks cloud platforms, and Python programming to deliver high-quality data solutions.

RESPONSIBILITIES

Data Infrastructure and Pipeline Development:
- Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse.
- Optimize data pipelines for performance, scalability, and cost-efficiency.
- Implement best practices for data governance, quality, and security.

Cloud Platform Management:
- Design and manage cloud-based data infrastructure on platforms such as Azure.
- Utilize cloud-native tools and services to enhance data processing and storage capabilities.
- Understand and design CI/CD pipelines for data engineering projects.

Programming:
- Develop and maintain high-quality, reusable code in Databricks and Synapse environments for data processing and automation.
- Collaborate with data scientists and analysts to design solutions into data workflows.
- Conduct code reviews and mentor junior engineers in Python, PySpark & SQL best practices.

If interested, please share your resume to aparna.ch@v3staffing.in
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
- Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions (AWS EMR, Databricks, Cloudera, etc.).
- Should be very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience working with S3 Data Lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Interested candidates can share a resume to sankarspstaffings@gmail.com with the below details inline:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
- Candidate must be proficient in Databricks.
- Understands where to obtain the information needed to make appropriate decisions.
- Demonstrates the ability to break down a problem into manageable pieces and implement effective, timely solutions.
- Identifies the problem versus the symptoms; manages problems that require the involvement of others to solve; reaches sound decisions quickly.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Roles & Responsibilities:
- Provide innovative and cost-effective solutions using Databricks.
- Optimize the use of all available resources.
- Develop solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.
- Learn and adapt quickly to new technologies as per business need.
- Develop a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security, and availability.

Skills:
- 7-10 yrs of experience in Databricks Delta Lake.
- Hands-on experience with Azure.
- Experience in Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

Interested candidates can share a resume to sankarspstaffings@gmail.com with the below details inline:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:
Posted 1 month ago
5.0 - 10.0 years
13 - 23 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities:
- Hands-on expertise in provisioning and configuring Azure components, including Data, App Services, Azure Kubernetes Service, Azure AI/ML, and others.
- Expert developer of Terraform scripts and ARM templates.
- Good with GitHub and Azure DevOps for code and infrastructure deployments.
- Good with API gateway (Kong) for managing and registering APIs.
- Understands the internals of Azure Databricks and Unity Catalog.
- Good at troubleshooting Azure platform issues.
- Nice to have: Snowflake provisioning and configuration skills.
- Must be an excellent communicator and collaborator.
Posted 1 month ago
5.0 - 10.0 years
12 - 16 Lacs
Pune
Work from Office
COMPANY OVERVIEW
Domo's AI and Data Products Platform lets people channel AI and data into innovative uses that deliver a measurable impact. Anyone can use Domo to prepare, analyze, visualize, automate, and build data products that are amplified by AI.

POSITION SUMMARY
Working as a member of Domo's Client Services team, the Senior Technical Consultant will focus on the implementation of fault-tolerant, highly scalable solutions. The successful candidate will have a minimum of 5 years of hands-on experience with data and will join an enthusiastic, fast-paced, and dynamic team at Domo. A successful candidate will have demonstrated sustained exceptional performance, innovation, creativity, insight, and good judgment.

KEY RESPONSIBILITIES
- Partner with customers, business users, and technical teams to understand data needs and deliver impactful solutions.
- Develop strategies for data acquisition and integration of new data into Domo's Data Engine.
- Map source-system data to Domo's data architecture and define integration strategies.
- Lead database analysis, design, and build efforts, if required.
- Design scalable and efficient data models for the data warehouse or data mart (data structure, storage, and integration).
- Implement best practices for data ingestion, transformation, and semantic modeling.
- Aggregate, transform, and prepare large data sets for use within Domo solutions.
- Provide guidance on designing and optimizing complex SQL queries.
- Provide consultation and mentoring to customers on best practices and skills to drive greater self-sufficiency.
- Ensure data quality and perform validation across pipelines and reports.
- Write Python scripts to automate governance processes.
- Create workflows in Domo to automate business processes.
- Build custom Domo applications or custom bricks to support unique client use cases.
- Develop Agent Catalysts to deliver generative-AI-powered insights within Domo, enabling intelligent data exploration, narrative generation, and proactive decision support through embedded AI features.
- Thoroughly review and document existing data pipelines, and guide customers through them to ensure a seamless transition and operational understanding.

JOB REQUIREMENTS
- 5+ years of experience supporting business intelligence systems in a BI or ETL Developer role.
- Expert SQL skills required.
- Expertise with Windows and Linux environments.
- Expertise with at least one of the following database technologies and familiarity with the others: relational, columnar, and NoSQL (e.g. MySQL, Oracle, MSSQL, Vertica, MongoDB).
- Understanding of data modeling (conceptual, logical, and physical model design, covering both traditional third normal form and dimensional modeling such as star and snowflake).
- Experience dealing with large data sets.
- Goal-oriented with strong attention to detail.
- Proven experience in effectively partnering with business teams to deliver their goals and outcomes.
- Bachelor's degree in Information Systems, Statistics, Computer Science, or a related field preferred, or equivalent professional experience.
- Excellent problem-solving skills and creativity; ability to think outside the box.
- Ability to learn and adapt quickly to varied requirements; thrives in a fast-paced environment.

NICE TO HAVE
- Experience working with APIs.
- Experience working with web technologies (JavaScript, HTML, CSS).
- Experience with scripting technologies (Java, Python, R, etc.).
- Experience working with Snowflake, Databricks, or BigQuery is a plus.
- Experience defining scope and requirements for projects.
- Excellent oral and written communication skills, and comfort presenting to everyone from entry-level employees to senior vice presidents.
- Experience with statistical methodologies.
- Experience with a wide variety of business data (marketing, finance, operations, etc.).
- Experience with large ERP systems (SAP, Oracle JD Edwards, Microsoft Dynamics, NetSuite, etc.).
- Understanding of data science, data modeling, and analytics.

LOCATION: Pune, Maharashtra, India

INDIA BENEFITS & PERKS
- Medical insurance provided
- Maternity and paternity leave policies
- Baby Bucks: a cash allowance to spend on anything for every newborn or adopted child
- Haute Mama: cash allowance for a maternity wardrobe (women employees only)
- Annual leave of 18 days + 10 holidays + 12 sick leaves
- Sodexo meal pass
- Health and wellness benefit
- One-time technology benefit: cash allowance towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)
- Marriage leave up to 3 days
- Bereavement leave up to 5 days

Domo is an equal opportunity employer. #LI-PD1 #LI-Onsite
Posted 1 month ago
4.0 - 5.0 years
3 - 7 Lacs
Mumbai, Pune, Chennai
Work from Office
Job Category: IT
Job Type: Full Time
Job Location: Pune/Mumbai/Bangalore/Chennai
Experience: 4 to 5 years

JD - Azure Data Engineer with QA:
- Must have: Azure Databricks, Azure Data Factory, Spark SQL.
- 4-5 years of development experience in Azure Databricks.
- Strong experience in SQL, along with performing Azure Databricks quality assurance.
- Understand complex data systems by working closely with engineering and product teams.
- Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop Data Lake, or other data storage locations.

Kind note: please apply or share your resume only if it matches the above criteria.
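The extract-transform-load flow described above reduces to three small functions; a minimal Python sketch using CSV input and SQLite as a stand-in sink (the file layout, schema, and cleaning rules are illustrative assumptions, not this employer's stack):

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast types, normalize case, drop rows missing the key."""
    out = []
    for r in rows:
        if not r.get("id"):
            continue  # reject records without a primary key
        out.append({"id": int(r["id"]),
                    "city": r["city"].strip().title(),
                    "amount": float(r["amount"])})
    return out

def load(rows, conn):
    """Load: write the cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, city TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:id, :city, :amount)", rows)

raw = "id,city,amount\n1, pune ,10.5\n,chennai,3.0\n2,mumbai,7.5\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
```

In a real pipeline the same three stages map onto Data Factory activities or Databricks notebooks; separating them keeps each stage independently testable, which is the QA angle of this role.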
Posted 1 month ago
5.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target; inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver (No. / Performance Parameter / Measure):
1. Process: no. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team management: productivity, efficiency, absenteeism.
3. Capability development: triages completed, technical test performance.

Mandatory Skills: Databricks - Data Engineering.
Experience: 5-8 years.
Posted 1 month ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client site, and to ensure they meet 100% of quality assurance parameters.

Do
1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes.
- Investigate problem areas, following the software development life cycle.
- Facilitate root-cause analysis of system issues and problem statements.
- Identify ideas to improve system performance and availability.
- Analyze client requirements and convert them into feasible designs.
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements.
- Confer with project managers to obtain information on software capabilities.

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modifications to existing ones.
- Ensure that code is error-free, with no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress, and document it.
- Provide feedback on usability and serviceability, trace results to quality risk, and report to the concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback regularly to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document all necessary details and reports formally, for a proper understanding of the software from client proposal to implementation.
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Deliver (No. / Performance Parameter / Measure):
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & reporting: 100% on-time MIS & report generation.

Mandatory Skills: Databricks - Data Engineering.
Experience: 3-5 years.
Posted 1 month ago
5.0 - 8.0 years
9 - 14 Lacs
Pune
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target; inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver (No. / Performance Parameter / Measure):
1. Process: no. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team management: productivity, efficiency, absenteeism.
3. Capability development: triages completed, technical test performance.

Mandatory Skills: Databricks - Data Engineering.
Experience: 5-8 years.
Posted 1 month ago
2.0 - 6.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Job Area: Information Technology Group, Information Technology Group > Systems Analysis

General Summary:
Proven experience in testing, particularly in data engineering. Strong coding skills in languages such as Python/Java. Proficiency in SQL and NoSQL databases. Hands-on experience in data engineering, ETL processes, and data warehousing QA activities. Design and develop automated test frameworks for data pipelines and ETL processes. Use tools and technologies such as Selenium, Jenkins, and Python to automate test execution. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with data technologies like Databricks, Hadoop, PySpark, and Kafka. Understanding of CI/CD pipelines and DevOps practices. Knowledge of containerization technologies like Docker and Kubernetes. Experience with performance testing and monitoring tools. Familiarity with version control systems like Git. Exposure to Agile and DevOps methodologies. Experience in test case creation, functional and regression testing, defect creation, and root-cause analysis. Good verbal and written communication, analytical, and problem-solving skills. Ability to work with team members around the globe (US, Taiwan, India, etc.) to provide required support. Overall 10+ years of experience.

Principal Duties and Responsibilities:
Manages project priorities, deadlines, and deliverables with minimal supervision. Determines which work tasks are most important for self and junior personnel, avoids distractions, and independently deals with setbacks in a timely manner. Understands relevant business and IT strategies, contributes to cross-functional discussion, and maintains relationships with IT and customer peers. Seeks out learning opportunities to increase own knowledge and skill within and outside of domain of expertise.
Serves as a technical lead on a subsystem or small feature, assigns work to a small project team, and works on advanced tasks to complete a project. Communicates with the project lead via email and direct conversation to make recommendations about overcoming impending obstacles. Adapts to significant changes and setbacks in order to manage pressure and meet deadlines independently. Collaborates with more senior Systems Analysts and/or business partners to document and present recommendations for improvements to existing applications and systems. Acts as a technical resource for less knowledgeable personnel. Manages projects of small to medium size and complexity, performs tasks, and applies expertise in subject area to meet deadlines. Anticipates complex issues and discusses within and outside of the project team to maintain open communication. Identifies test scenarios and/or cases, oversees test execution, and provides QA results to the business across a few projects; assists with defining test strategies and testing methods, and conducts business risk assessment. Performs troubleshooting, assists on complex issues related to bugs in production systems or applications, and collaborates with business subject matter experts on issues. Assists and/or mentors other team members for training and performance management purposes, disseminates subject matter knowledge, and trains business on how to use tools.

Level of Responsibility:
Working under some supervision. Taking responsibility for own work and making decisions that are moderate in impact; errors may have relatively minor financial impact or effect on projects, operations, or customer relationships; errors may require involvement beyond the immediate work group to correct. Using verbal and written communication skills to convey complex and/or detailed information to multiple individuals/audiences with differing knowledge levels.
The role may require strong negotiation and influence, and communication to large groups or high-level constituents. Having a moderate amount of influence over key organizational decisions (e.g., is consulted by senior leadership to provide input on key decisions). Deductive and inductive problem solving is required; multiple approaches may be taken/necessary to solve the problem; often information is missing or incomplete; intermediate data analysis/interpretation skills may be required. Exercising creativity to draft original documents, imagery, or work products within established guidelines.

Minimum Qualifications:
4+ years of IT-relevant work experience with a Bachelor's degree, OR 6+ years of IT-relevant work experience without a Bachelor's degree. Minimum 6-8 years of proven experience in testing, particularly in data engineering.

Preferred Qualifications:
Proven experience in testing, particularly in data engineering. 10+ years of QA/testing experience. Strong coding skills in languages such as Python/Java. Proficiency in SQL and NoSQL databases.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
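A flavor of the automated data-pipeline checks this role describes can be sketched in plain Python (a hedged illustration only; the tables, fields, and checks are toy examples, not from the posting):

```python
# Minimal sketch of data-validation checks an automated ETL test framework
# might run. All data and field names below are hypothetical.

def check_row_counts(source_rows, target_rows):
    """Source and target of a load should agree on row count."""
    return len(source_rows) == len(target_rows)

def check_no_nulls(rows, required_fields):
    """Required fields must be populated in every loaded row."""
    return all(row.get(f) is not None for row in rows for f in required_fields)

def check_unique_keys(rows, key):
    """The key column must not contain duplicates after the load."""
    keys = [row[key] for row in rows]
    return len(keys) == len(set(keys))

# Toy extract/load pair to exercise the checks
source = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 5.5}]
target = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 5.5}]

results = {
    "row_counts": check_row_counts(source, target),
    "no_nulls": check_no_nulls(target, ["id", "amt"]),
    "unique_keys": check_unique_keys(target, "id"),
}
print(results)  # all three checks pass on this toy pair
```

In practice these assertions would run inside a test runner such as pytest against real pipeline outputs.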
Posted 1 month ago
2.0 - 4.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Overview
We are PepsiCo. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with pep+ (PepsiCo Positive). For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com.

PepsiCo Data Analytics & AI Overview
With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo's leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise.

The Data Science Pillar in DA&AI is the organization to which Data Scientists and ML Engineers report within the broader D+A organization. DS will also lead, facilitate, and collaborate on the larger DS community in PepsiCo. DS will provide the talent for the development and support of DS components and their life cycle within DA&AI products, and will support pre-engagement activities as requested and validated by the prioritization framework of DA&AI.

Data Scientist - Gurugram and Hyderabad
The role will work on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines.
Responsibilities
Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope
Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities
Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
Use big data technologies to help process data and build scaled data pipelines (batch to real time)
Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP Pipelines
Set up cloud alerts, monitors, dashboards, and logging, and troubleshoot machine learning infrastructure
Automate ML model deployments

Qualifications
Minimum 3 years of hands-on work experience in data science / machine learning
Minimum 3 years of SQL experience
Experience in DevOps and Machine Learning (ML) with hands-on experience with one or more cloud service providers
BE/BS in Computer Science, Math, Physics, or other technical fields
Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models
Programming Skills: hands-on experience in statistical programming languages like Python and database query languages like SQL
Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
Any Cloud: experience in Databricks and ADF is desirable
Familiarity with Spark, Hive, and Pig is an added advantage
Model deployment experience will be a plus
Experience with version control systems like GitHub and CI/CD tools
Experience in exploratory data analysis
Knowledge of MLOps/DevOps and deploying ML models is required
Experience using MLflow, Kubeflow, etc. will be preferred
Experience executing and contributing to MLOps automation infrastructure is good to have
Exceptional analytical and problem-solving skills
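As an illustration of the applied-statistics baseline the qualifications mention, a closed-form ordinary least squares fit can be sketched in plain Python (toy data, not from the posting; real work would use a library such as scikit-learn or statsmodels):

```python
# Hedged sketch: single-feature OLS via the closed-form solution, the kind of
# baseline a data scientist sanity-checks a model against. Data is invented.

def ols_fit(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
slope, intercept = ols_fit(xs, ys)
print(round(slope, 2), round(intercept, 2))  # → 1.94 0.15
```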
Posted 1 month ago
2.0 - 4.0 years
9 - 14 Lacs
Hyderabad, Gurugram
Work from Office
Overview
We are PepsiCo. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with pep+ (PepsiCo Positive). For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com.

PepsiCo Data Analytics & AI Overview
With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo's leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise.

The Data Science Pillar in DA&AI is the organization to which Data Scientists and ML Engineers report within the broader D+A organization. DS will also lead, facilitate, and collaborate on the larger DS community in PepsiCo. DS will provide the talent for the development and support of DS components and their life cycle within DA&AI products, and will support pre-engagement activities as requested and validated by the prioritization framework of DA&AI.

Data Scientist - Gurugram and Hyderabad
The role will work on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines.
Responsibilities
Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope
Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities
Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
Use big data technologies to help process data and build scaled data pipelines (batch to real time)
Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP Pipelines
Set up cloud alerts, monitors, dashboards, and logging, and troubleshoot machine learning infrastructure
Automate ML model deployments

Qualifications
Minimum 3 years of hands-on work experience in data science / machine learning
Minimum 3 years of SQL experience
Experience in DevOps and Machine Learning (ML) with hands-on experience with one or more cloud service providers
BE/BS in Computer Science, Math, Physics, or other technical fields
Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models
Programming Skills: hands-on experience in statistical programming languages like Python and database query languages like SQL
Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
Any Cloud: experience in Databricks and ADF is desirable
Familiarity with Spark, Hive, and Pig is an added advantage
Model deployment experience will be a plus
Experience with version control systems like GitHub and CI/CD tools
Experience in exploratory data analysis
Knowledge of MLOps/DevOps and deploying ML models is required
Experience using MLflow, Kubeflow, etc. will be preferred
Experience executing and contributing to MLOps automation infrastructure is good to have
Exceptional analytical and problem-solving skills
Posted 1 month ago
10.0 - 15.0 years
4 - 7 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title: Power BI
Experience: 10-15 Years
Location: Chennai, Bangalore, Hyderabad

1. Strong Power BI technical expertise with Power Automate and paginated reports skillset [these skills are mandatory].
2. Strong stakeholder management, given this role will also help us define requirements/wireframes, working with the business.
3. Strong delivery management.
4. Life insurance experience is desirable but not essential. A bachelor's degree in information technology or a related discipline with 10+ years of managing delivery and operation of BI and analytics platforms and services, preferably in the insurance or financial industry. Deep understanding of Power BI, data visualization practices, and the underlying data engineering and modelling to support the reporting data layer, preferably in Databricks or similar. Experience in reporting within the life insurance domain (covering claims, policy, underwriting) is not mandatory but is highly valued. Proven experience in independently leading the technical delivery of a team of BI engineers, both onshore and offshore, while effectively managing delivery risks and issues. Excellent communication skills with the ability to convey technical concepts to both technical and non-technical stakeholders. Strong leadership skills, including the ability to mentor, coach, and develop a high-performing business intelligence team. Migration experience from Cognos and Tableau to Power BI will be highly regarded.
5. To develop and guide the team members in enhancing their technical capabilities and increasing productivity; to prepare and submit status reports for minimizing exposure and risks on the project or closure of escalations. To be responsible for providing technical guidance/solutions; define, advocate, and implement best practices and coding standards for the team. To ensure process compliance in the assigned module and participate in technical discussions/reviews as a technical consultant for feasibility studies (technical alternatives, best packages, supporting architecture best practices, technical risks, breakdown into components, estimations).
6. A technical lead candidate who will be able to support the team members and resolve any queries raised from the project for Power Automate and paginated reports.
Posted 1 month ago
12.0 - 15.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Title: Architect - AWS Databricks, SQL
Experience: 12-15 Years
Location: Bangalore
Skills: Architect, AWS, Databricks, SQL
Posted 1 month ago
12.0 - 20.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: Senior Software Engineer
Experience: 12-20 Years
Location: Bangalore

Strong knowledge and hands-on experience in AWS Databricks
Nice to have: worked in the HP ecosystem (FDL architecture)
Technically strong enough to help the team with any technical issues they face during execution
Owns the end-to-end technical deliverables
Hands-on Databricks and SQL knowledge
Experience with AWS S3, Redshift, EC2 and Lambda services
Extensive experience in developing and deploying big data pipelines
Experience with Azure Data Lake
Strong hands-on SQL development / Azure SQL and in-depth understanding of optimization and tuning techniques in SQL with Redshift
Development in notebooks (Jupyter, Databricks, Zeppelin, etc.)
Development experience in Spark
Experience in scripting languages like Python and any other programming language

Roles and Responsibilities
Candidate must have hands-on experience in AWS Databricks
Good development experience using Python/Scala, Spark SQL and DataFrames
Hands-on experience with Databricks and Data Lake, plus SQL knowledge, is a must
Performance tuning, troubleshooting, and debugging Spark

Process Skills: Agile (Scrum)
Qualification: Bachelor of Engineering (computer background preferred)
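As one example of the SQL optimization and warehousing skills listed above, the common keep-latest-row-per-key deduplication pattern can be sketched with a window function (shown on SQLite purely for illustration; the same SQL shape works in Databricks or Redshift, and the table and column names are invented):

```python
# Hedged sketch: ROW_NUMBER() deduplication, keeping the newest version of
# each business key. Runs on in-memory SQLite for demonstration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INT, customer TEXT, updated_at TEXT);
    INSERT INTO orders VALUES
        (1, 'alice', '2024-01-01'),
        (1, 'alice', '2024-02-01'),   -- later version of order 1
        (2, 'bob',   '2024-01-15');
""")

# Partition by the business key, order newest-first, keep row number 1.
latest = conn.execute("""
    SELECT order_id, customer, updated_at FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id ORDER BY updated_at DESC
               ) AS rn
        FROM orders
    ) WHERE rn = 1
    ORDER BY order_id
""").fetchall()
print(latest)  # → [(1, 'alice', '2024-02-01'), (2, 'bob', '2024-01-15')]
```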
Posted 1 month ago
10.0 - 12.0 years
9 - 13 Lacs
Chennai
Work from Office
Job Title: Data Architect
Experience: 10-12 Years
Location: Chennai

10-12 years of experience as a Data Architect
Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis
Proficiency in programming languages such as Python, Java, Scala, or Go
Experience with big data tools like Hadoop and Hive, and data warehouses such as Snowflake, Redshift, Databricks, Microsoft Fabric
Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB)
Should be flexible to work as an individual contributor
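To illustrate the streaming aggregation work implied by Kafka/Flink/Spark Streaming expertise, a tumbling-window count can be sketched in plain Python (a deliberately simplified stand-in for what those engines do at scale; the event data is invented):

```python
# Hedged sketch: tumbling-window counts over (timestamp, key) events — the
# core aggregation a streaming job commonly performs. Toy data only.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group events into fixed, non-overlapping windows; count per (window, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(events, window_secs=10))
# → {(0, 'click'): 2, (0, 'view'): 1, (10, 'click'): 1}
```

Real engines add ordering guarantees, late-event handling, and state checkpointing on top of this basic idea.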
Posted 1 month ago
5.0 - 10.0 years
12 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Experience Range: 4 - 12+ years
Work Location: Bangalore (preferred)
Must-Have Skills: Airflow, BigQuery, Hadoop, PySpark, Spark/Scala, Python, Spark SQL, Snowflake, ETL, Data Modelling, Erwin or ER Studio, Stored Procedures & Functions, AWS, Azure Databricks, Azure Data Factory
No. of Openings: 10+

Job Description:
We have multiple open roles with our clients:
Role 1: Data Engineer
Role 2: Support Data Engineer
Role 3: ETL Support Engineer
Role 4: Senior Data Modeler
Role 5: Data Engineer - Databricks

Please find below the JDs for each role.

Role 1: Data Engineer
5+ years of experience in data engineering or a related role
Proficiency in Apache Airflow for workflow scheduling and management
Strong experience with Hadoop ecosystems, including HDFS, MapReduce, and Hive
Expertise in Apache Spark/Scala for large-scale data processing
Proficient in Python
Advanced SQL skills for data analysis and reporting
Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) is a plus
Designs, proposes, builds, and maintains databases and data lakes, data pipelines that transform and model data, and reporting and analytics solutions
Understands business problems and processes based on direct conversations with customers, can see the big picture, and translates that into specific solutions
Identifies issues early, tactfully raises concerns, and proposes solutions
Participates in code peer reviews
Clearly articulates the pros/cons of various tools/approaches
Documents and diagrams proposed solutions

Role 2: Support Data Engineer
Prioritize and resolve Business-As-Usual (BAU) support queries within agreed Service Level Agreements (SLAs) while ensuring application stability
Drive engineering delivery to reduce technical debt across the production environment, collaborating with development and infrastructure teams
Perform technical analysis of the production platform to identify and address performance and resiliency issues
Participate in the Software Development Lifecycle (SDLC) to improve production standards and controls
Build and maintain the support knowledge database, updating the application runbook with known tasks and managing event monitoring
Create health check monitors, dashboards, synthetic transactions and alerts to increase monitoring and observability of systems at scale
Participate in the on-call rotation supporting application release validation, alert response, and incident management
Collaborate with development, product, and customer success teams to identify and resolve technical problems
Research and implement recommendations from post-mortem analyses for continuous improvement
Document issue details and solutions in our ticketing system (JIRA and ServiceNow)
Assist in creating and maintaining technical documentation, runbooks, and knowledge base articles
Navigate a complex system, requiring deep troubleshooting/debugging skills and an ability to manage multiple contexts efficiently
Oversee the collection, storage, and maintenance of production data, ensuring its accuracy and availability for analysis
Monitor data pipelines and production systems to ensure smooth operation and quickly address any issues that arise
Implement and maintain data quality standards, conducting regular checks to ensure data integrity
Identify and resolve technical issues related to data processing and production systems
Work closely with data engineers, analysts, and other stakeholders to optimize data workflows and improve production efficiency
Contribute to continuous improvement initiatives by analyzing data to identify areas for process optimization

Role 3: ETL Support Engineer
6+ years of experience with ETL support and development
ETL tools: experience with popular ETL tools like Talend and Microsoft SSIS
Experience with relational databases (e.g., SQL Server, Postgres)
Experience with the Snowflake data warehouse
Proficiency in writing complex SQL queries for data validation, comparison, and manipulation
Familiarity with version control systems like Git/GitHub to manage changes in test cases and scripts
Knowledge of defect tracking tools like JIRA and ServiceNow
Banking domain experience is a must
Understanding of the ETL process
Perform functional, integration and regression testing for ETL processes
Validate and ensure data quality and consistency across different data sources and targets
Develop and execute test cases for ETL workflows and data pipelines
Load testing: ensuring that the data warehouse can handle the volume of data being loaded and queried under normal and peak conditions
Scalability: testing for the scalability of the data warehouse in terms of data growth and system performance

Role 4: Senior Data Modeler
7+ years of experience in metadata management, data modelling, and related tools (Erwin, ER Studio, or others)
Overall 10+ years of experience in IT
Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional data platform technologies, and ETL and data ingestion)
Experience with data warehouse, data lake, and enterprise big data platforms in multi-data-center contexts required
Strong communication and presentation skills
Help the team implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional) and data tools (reporting, visualization, analytics, and machine learning)
Work with business and application/solution teams to implement data strategies and develop the conceptual/logical/physical data models
Define and govern data modelling and design standards, tools, best practices, and related development for enterprise data models
Hands-on modelling and mapping between source system data models and data warehouse data models
Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks with respect to modelling and mappings
Hands-on experience in writing complex SQL queries
Good to have: experience in data modelling for NoSQL objects

Role 5: Data Engineer - Databricks
Design and build data pipelines using Spark SQL and PySpark in Azure Databricks
Design and build ETL pipelines using ADF
Build and maintain a Lakehouse architecture in ADLS/Databricks
Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion, etc.
Work with the DevOps team to deploy solutions in production environments
Control data processes and take corrective action when errors are identified; corrective action may include executing a workaround process and then identifying the cause and solution for the data errors
Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items
Collaborate with your Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices
Lead projects that include other team members and participate in projects led by other team members
Apply change management tools including training, communication and documentation to manage upgrades, changes and data migrations
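The Airflow-style orchestration in Role 1 and Role 5 boils down to running tasks in dependency order. That core idea can be sketched with the standard library alone (the task names are hypothetical, and this is not the Airflow API):

```python
# Hedged sketch: dependency-ordered execution, the concept behind an Airflow
# DAG, using Python's graphlib. Real Airflow adds scheduling, retries, etc.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order() yields tasks so every task runs after its dependencies
order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['extract', 'transform', 'quality_check', 'load']
```

TopologicalSorter also raises CycleError on circular dependencies, mirroring how orchestrators reject cyclic DAGs.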
Posted 1 month ago
12.0 - 16.0 years
18 - 25 Lacs
Hyderabad
Remote
JD for Full Stack Developer
Experience: 10+ years

Front-End Development
Design and implement intuitive and responsive user interfaces using React.js or similar front-end technologies
Collaborate with stakeholders to create a seamless user experience
Create mockups and UI prototypes for quick turnaround using Figma, Canva, or similar tools
Strong proficiency in HTML, CSS, JavaScript, and React.js
Experience with styling and graph libraries such as Highcharts, Material UI, and Tailwind CSS
Solid understanding of React fundamentals, including routing, the virtual DOM, and Higher-Order Components (HOCs)
Knowledge of REST API integration
Understanding of Node.js is a big advantage

Middleware Development
Experience with REST API development, preferably using FastAPI
Proficiency in programming languages like Python
Integrate APIs and services between front-end and back-end systems
Experience with Docker and containerized applications

Back-End Development
Experience with orchestration tools such as Apache Airflow or similar
Design, develop, and manage simple data pipelines using Databricks, PySpark, and Google BigQuery
Medium-level expertise in SQL
Basic understanding of authentication methods such as JWT and OAuth

Bonus Skills
Experience working with cloud platforms such as AWS, GCP, or Azure
Familiarity with Google BigQuery and Google APIs
Hands-on experience with Kubernetes for container orchestration

Contact: Sandeep Nunna
Ph No: 9493883212
Email: sandeep.nunna@clonutsolutions.com
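As a sketch of the JWT authentication basics the back-end section asks for, HS256 signing and verification can be built from the standard library alone (the secret and claims are placeholders; a real service would use a vetted JWT library such as PyJWT):

```python
# Hedged sketch: HS256 token signing/verification of the kind JWT middleware
# performs. Secret and payload below are demo placeholders only.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign_jwt({"sub": "user-1"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))        # → True (valid signature)
print(verify_jwt(token + "x", b"demo-secret"))  # → False (tampered token)
```

Production code would additionally check the `exp` claim and reject unexpected `alg` values.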
Posted 1 month ago