Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
5.0 years
4 - 6 Lacs
Bengaluru
On-site
Experience: 5 to 8 years
Location: Bengaluru, Gurgaon, Pune
Job code: 101299
Posted on: Aug 04, 2025

About Us:
AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory and fund-raising services to our global clients across banking & financial services, healthcare, supply chain & logistics, consumer retail, manufacturing, eGovernance and other industry sectors. We are looking for hungry, highly skilled and motivated individuals to join our dynamic team. If you’re passionate about technology and thrive in a fast-paced environment, we want to hear from you.

Job Summary:
We are seeking a skilled and experienced Azure + Databricks Engineer to design, develop, and maintain scalable data solutions in cloud-based environments. This role requires a deep understanding of Azure services, the Databricks platform, data engineering best practices, and cloud infrastructure. The ideal candidate will have experience building data pipelines, implementing data governance, and optimizing data workflows in enterprise-scale environments.

Key Responsibilities:
Build and maintain data pipelines using Azure Data Factory and Databricks.
Develop scalable ETL/ELT workflows with Spark (PySpark/Scala).
Manage and optimize Databricks clusters and job performance.
Ensure data security, governance, and compliance (RBAC, Unity Catalog).
Collaborate with cross-functional teams and document data solutions.

Role Requirements and Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or a related field.
4–8 years of hands-on experience with Azure and Databricks.
Proven experience in Azure cloud services with a focus on data engineering and analytics.
Strong hands-on experience with Databricks, Apache Spark, and Delta Lake.
Proficiency in Python, SQL, and optionally Scala or PySpark.
Solid understanding of CI/CD pipelines, Terraform, or ARM templates for infrastructure automation.
Strong knowledge of data warehousing concepts and data lake architectures.
Experience with Databricks REST APIs, Azure DevOps, and monitoring tools (Log Analytics, Azure Monitor).

Why Join Us:
Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
Continuous investment in employee growth and professional development with a strong focus on up- and re-skilling.
Competitive compensation & benefits, ESOPs and international assignments.
Supportive environment with healthy work-life balance and a focus on employee well-being.
Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.

How to Apply:
If you are interested in joining our team and meet the qualifications listed above, please apply and submit your resume highlighting why you are the ideal candidate for this position.
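The posting's core duty, building ETL/ELT pipelines, follows the same extract-transform-load shape whether it runs on Databricks or elsewhere. A minimal, stdlib-only Python sketch of that pattern; the record fields, currency table, and conversion rates here are illustrative assumptions, not anything from the posting:

```python
# Hypothetical exchange-rate lookup used by the transform step.
FX_TO_INR = {"USD": 83.0, "INR": 1.0}

def extract(rows):
    """Extract: parse raw CSV-like records into dicts."""
    return [dict(zip(("order_id", "amount", "currency"), r.split(","))) for r in rows]

def transform(records):
    """Transform: normalize amounts to one currency and drop malformed rows."""
    out = []
    for rec in records:
        try:
            rate = FX_TO_INR[rec["currency"]]
            out.append({"order_id": rec["order_id"],
                        "amount_inr": round(float(rec["amount"]) * rate, 2)})
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine these for inspection
    return out

def load(records, sink):
    """Load: append validated records to the target store."""
    sink.extend(records)
    return len(records)

raw = ["A1,10.5,USD", "A2,oops,USD", "A3,250,INR"]
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded)  # 2 valid rows loaded; the malformed A2 row is dropped
```

In a Databricks pipeline the same three stages would typically map to a source read, a set of DataFrame transformations, and a Delta Lake write, but the separation of concerns is the same.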
Posted 5 days ago
3.0 - 6.0 years
0 Lacs
Bengaluru
On-site
Imagine what you could do here. At Apple, we believe new insights have a way of becoming excellent products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The people here at Apple don’t just build products - they build the kind of wonder that’s revolutionised entire industries. It’s the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it.

As a Full Stack Developer with the Manufacturing Systems and Infrastructure (MSI) team, you will be part of a ground-up innovation team creating new and innovative experiences for human interaction with autonomous technology. We are looking to hire extraordinary individuals with a strong focus on core application development, great communication and interpersonal skills, and the ability to work effectively across multiple business and technical teams. As part of this team, you will build full stack applications capturing and processing large-scale data. We work in a fast-paced, startup-like environment and you will be part of every stage of development, working with user interface designers, operations test engineering, and product quality management teams from concept phase to deployment.

Description
- Lead the architecture, design, and development of robust full-stack solutions.
- Collaborate with product managers, designers, and other partners to understand project requirements.
- Mentor and provide technical guidance to junior developers.
- Participate in code reviews to maintain code quality and ensure best practices are followed.
- Continuously research and evaluate new technologies to drive innovation.
- Troubleshoot, debug, and optimise application performance.
- Stay ahead of industry trends and emerging technologies.

Key Qualifications
We are looking for someone with the following qualifications:
Demonstrated experience as a Full Stack Developer, with a focus on both front-end and back-end technologies.
Strong proficiency in front-end frameworks such as React, Angular, or Vue.js.
Experience with at least one back-end technology such as Java, Scala, Go, or Python.
Solid understanding of database systems (SQL, NoSQL).
Extensive use of APIs and a strong understanding of HTTP(S) and REST architecture.
Knowledge of cloud services and deployment (AWS, Azure, Google Cloud).
Familiarity with DevOps practices and CI/CD pipelines.
Experience with micro-services architecture.
Familiarity with containerization and orchestration (Docker, Kubernetes).
Strong problem-solving and analytical skills.
Excellent communication and leadership skills.

Nice to Have
Experience with data stream processing, data platforms at scale and distributed systems, i.e. Spark, Kafka, Hadoop.
Contributions to open-source projects.

Education & Experience
B.Tech. degree in computer science or an equivalent field with 3 - 6 years of hands-on programming experience.

Additional Requirements
Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women, protected veterans, and individuals with disabilities. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.

Submit CV
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Andhra Pradesh
On-site
About Evernorth:
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Data Engineering Advisor

Position Summary:
We are looking for a Databricks Data Engineer to join our Pharmacy Benefit Management Clinical Space (PBS) Engineering team as part of Care Delivery Solutions. As a Data Engineer, the candidate will work with a highly agile team of developers to develop, execute, validate, and maintain the Pharmacy Benefit Management Clinical Space ecosystem. The candidate needs to be creative, responsive, flexible, and willing to participate in an open, collaborative peer environment and guide the team as necessary. The candidate enjoys working in a team of high performers who hold each other accountable to perform to their very best, and does not shy away from opportunities to provide and take feedback with team members. The candidate works towards delivering a Minimal Viable Product with proper testing, avoids scope creep, and follows Software Engineering best practices as defined by Evernorth. The candidate is expected to actively participate in all ceremonies, such as daily stand-ups, story grooming, user story reviews, and sprint retrospectives.

About PBS Org:
The current PBS Engineering organization focuses on enabling the product capabilities for the PBS business. These include the conceptualization, architecture, design, development, and support functions for the Pharmacy Benefit Management Clinical Space business products. The strategic roadmap for PBS focuses on patient activation and routine care for various LOBs of the Pharmacy Benefit Management Clinical Space. The PBS Engineering organization covers the following capabilities:
Clinical data mart management and development of integrations with POS and Router applications
Development of non-clinical apps
Data integrations for the Home-based Care Engineering business
Data interoperability
Shared services capabilities

Responsibilities:
Work with Solution Architects to drive the definition of the data solution design, mapping business and technical requirements to define data assets that meet both business and operational expectations.
Own and manage data models and data design artefacts, and provide guidance and consultancy on best practice and standards for customer-focused data delivery and data management practices.
Be an advocate for data-driven design within an agile delivery framework.
Plan and implement procedures that will maximize engineering and operating efficiency for application integration technologies.
Identify and drive process improvement opportunities.
Actively participate in the full project lifecycle, from early shaping of high-level estimates and delivery plans through to active governance of the solution as it is developed and built in later phases.
Capture and manage risks, issues, and assumptions identified through the lifecycle, articulating the financial and other impacts associated with these concerns.
Take complete accountability for the technology assets owned by the team.
Provide leadership to the team, ensuring the team meets the following objectives:
Design, configuration, and implementation of middleware products and application design/development within the supported technologies and products.
Proactive monitoring and management design of supported assets, assuring performance, availability, security, and capacity.
Sizing user stories based on the time and difficulty to complete them.
Providing input on specific challenges facing user stories; discussing risks, dependencies, and assumptions.
Selecting user stories to be completed in the iteration, based on user story priority and team capacity and velocity.

Qualifications:
Experience of leading data design delivering significant assets to an organization, e.g. Data Warehouse, Data Lake, Customer 360 Data Platform.
Demonstrable experience within data capabilities such as data modelling, data migration, data quality management, and data integration, with a preference for ETL/ELT and data streaming experience.
Experience with ETL tools such as Databricks, Apache Airflow, automation of data pipeline processes, AWS, SQL Server, Tableau, Boomi, and Power BI tool sets.
Experience in Python, Java, or Scala. Proficiency in SQL is crucial for database management.
Experience with big data technologies like Hadoop, Spark, and Apache Kafka.
Experience with data warehousing solutions like Amazon Redshift or Google BigQuery.
A track record of working successfully in a globally dispersed team would be beneficial.
Familiarity with agile methodology, including SCRUM team leadership.
Familiarity with modern delivery practices such as continuous integration, behavior/test driven development, and specification by example.
Proven experience with architecture, design, and development of large-scale enterprise application solutions.
Strong written and verbal communication skills with the ability to interact with all levels of the organization.
Proactive participation in design sessions, Program Increment (PI) planning, and sprint refinement meetings.

Required Experience & Education:
3 to 7 years of IT experience, with 2 to 6 years in a Data Architecture or Data Engineering role, is required.
College degree (Bachelor) in related technical/business areas or equivalent work experience.

Desired Experience:
Exposure to serverless AWS
Exposure to EKS

Join us in driving growth and improving lives.
Posted 5 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are seeking a highly skilled and experienced Machine Learning Engineer to join our dynamic team. As a Machine Learning Engineer, you will be responsible for the design, development, deployment, and maintenance of machine learning models and systems that drive our [mention specific business area or product, e.g., recommendation engine, fraud detection system, autonomous vehicles]. You will work closely with data scientists, software engineers, and product managers to translate business needs into scalable and reliable machine learning solutions. This is a key role that requires a strong technical foundation combined with a passion for innovation and problem-solving.

Responsibilities
Model Development & Deployment:
* Design, develop, and deploy machine learning models using various algorithms (e.g., regression, classification, clustering, deep learning) to solve complex business problems.
* Select appropriate datasets and features for model training, ensuring data quality and integrity.
* Implement and optimize model training pipelines, including data preprocessing, feature engineering, model selection, and hyperparameter tuning.
* Deploy models to production environments using containerization technologies (e.g., Docker, Kubernetes) and cloud platforms (e.g., AWS, GCP, Azure).
* Monitor model performance in production, identify and troubleshoot issues, and implement model retraining and updates as needed.
Infrastructure & Engineering:
* Develop and maintain APIs for model serving and integration with other systems.
* Write clean, well-documented, and testable code.
* Collaborate with software engineers to integrate models into existing products and services.
Research & Innovation:
* Stay up-to-date with the latest advancements in machine learning and related technologies.
* Research and evaluate new algorithms, tools, and techniques to improve model performance and efficiency.
* Contribute to the development of new machine learning solutions and features.
* Proactively identify opportunities to leverage machine learning to solve business challenges.
Collaboration & Communication:
* Collaborate effectively with data scientists, software engineers, product managers, and other stakeholders.
* Communicate technical concepts and findings clearly and concisely to both technical and non-technical audiences.
* Participate in code reviews and contribute to the team's knowledge sharing.

Qualifications
Experience: 7+ years of experience in machine learning engineering or a related field.
Technical Skills:
* Programming Languages: Proficient in Python; experience with other languages (e.g., Java, Scala, R) is a plus.
* Machine Learning Libraries: Strong experience with machine learning libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
* Data Processing: Experience with data manipulation and processing using libraries like Pandas, NumPy, and Spark.
* Model Deployment: Experience with model deployment frameworks and platforms (e.g., TensorFlow Serving, TorchServe, Seldon, AWS SageMaker, Google AI Platform, Azure Machine Learning).
* Databases: Experience with relational and NoSQL databases (e.g., SQL, MongoDB, Cassandra).
* Version Control: Experience with Git and other version control systems.
* DevOps: Familiarity with DevOps practices and tools.
* Strong understanding of machine learning concepts and algorithms: regression, classification, clustering, deep learning, etc.
Soft Skills:
* Excellent problem-solving and analytical skills.
* Strong communication and collaboration skills.
* Ability
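The training-pipeline stages listed under Model Development & Deployment (data preparation, a train/validation split, fitting, evaluation) can be sketched end to end without any ML framework. A stdlib-only illustration; the synthetic data-generating function, learning rate, and iteration count are assumptions chosen purely for the example:

```python
import random

# Synthetic dataset: y = 2x + 1 plus small uniform noise (an assumed toy problem).
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.uniform(-0.1, 0.1))
        for x in (i / 50.0 for i in range(100))]

# Train/validation split, as a stand-in for the "data preprocessing" stage.
split = int(0.8 * len(data))
train, valid = data[:split], data[split:]

# Model fitting: gradient descent on mean squared error for y = w*x + b.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    gw = sum((w * x + b - y) * x for x, y in train) / len(train)
    gb = sum((w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb

# Evaluation: mean squared error on the held-out validation slice.
mse = sum((w * x + b - y) ** 2 for x, y in valid) / len(valid)
print(round(w, 2), round(b, 2), round(mse, 4))
```

A production pipeline replaces each stage with framework tooling (e.g. a scikit-learn estimator and a tuning loop), but the monitor-retrain-redeploy cycle described in the posting operates on exactly these fit and evaluate steps.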
Posted 5 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Senior AI/ML Engineer
Experience: 4 - 8 years
Location: Chennai
Work Mode: WFO (all 5 days)

About the client:
Join a fast-growing global product company that's redefining life sciences with AI-driven innovation. Their platforms power top pharma firms in 30+ countries, ensuring safety, speed, and compliance. From Gen AI to predictive systems, the Chennai tech hub is building cutting-edge solutions that matter. If you're ready to apply AI where it truly impacts lives, this is your opportunity.

Job Description:
We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team to build the next generation of applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.

Roles & Responsibilities:
Design, implement, and deploy Machine Learning solutions to solve complex problems and deliver real business value, i.e. revenue, engagement, and customer satisfaction.
Collaborate with data product managers, software engineers and SMEs to identify AI/ML opportunities for improving process efficiency.
Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis.
Monitor and improve model performance via data enhancement, feature engineering, experimentation and online/offline evaluation.
Stay up to date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry.
Mentor junior engineers, fostering a culture of continuous learning and improvement.

Required Skills:
Experience in the life science domain or a related field is preferable.
A minimum of 4-8 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects.
Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models.
Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems.
Thorough understanding of the software development lifecycle, DevOps (build, continuous integration, deployment tools) and best practices.
Strong programming skills in Python, Scala, Go, Rust or other languages.
Excellent written and verbal communication skills and interpersonal skills.
Experience with MLOps platforms, such as Kubeflow or MLflow.
Experience with ML frameworks, such as scikit-learn, TensorFlow, PyTorch.
Experience with Gen AI tools, such as LangChain, LlamaIndex, and open-source vector DBs.
Advanced degree in Computer Science, Machine Learning or a related field.

If you are ready to create an impact and feel this is the right opportunity for you, then write to me at aishwarya.saravanan@antal.com and let's connect!
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
India
On-site
Experience: 5 to 9 years

Must-have Skills:
Kotlin/Scala/Java
Spark SQL
Spark Streaming
Any cloud (AWS preferable)
Kafka/Kinesis/any streaming service
Object-Oriented Programming
Hive, ETL/ELT design experience
CI/CD experience (ETL pipeline deployment)
Data Modeling experience

Good-to-Have Skills:
Git or a similar version control tool
Knowledge of CI/CD, Microservices

Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Roles & Responsibilities:
Sound knowledge of Spark architecture, distributed computing and Spark Streaming.
Proficiency in Spark, including RDD and DataFrame core functions, troubleshooting and performance tuning.
Good understanding of object-oriented concepts and hands-on experience with Kotlin/Scala/Java, with excellent programming logic and technique.
Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
Good experience in SQL.
Managing the team of Associates and Senior Associates and ensuring utilization is maintained across the project.
Able to mentor new members during onboarding to the project.
Understand the client requirements and be able to design, develop from scratch and deliver.
AWS cloud experience would be preferable.
Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on the cloud (AWS is preferred).
Leading client calls to flag any delays, blockers or escalations, and collating all the requirements.
Managing project timing and client expectations, and meeting deadlines.
Should have played project and team management roles.
Facilitate meetings within the team on a regular basis.
Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
Optimization, maintenance, and support of pipelines.
Strong analytical and logical skills.
Ability to comfortably tackle new challenges and learn.
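The Spark skills this role calls out (RDD and DataFrame core functions such as filter, map, and reduceByKey) boil down to functional transformations over keyed records. A stdlib-only Python sketch of that filter/map/reduce-by-key pattern, using hypothetical click-event data rather than anything from the posting:

```python
from itertools import groupby

# Hypothetical (user, event) records standing in for a distributed dataset.
events = [("user1", "click"), ("user2", "view"), ("user1", "click"),
          ("user3", "click"), ("user2", "click")]

# filter + map: keep click events and pair each with a count of 1,
# mirroring rdd.filter(...).map(lambda u: (u, 1)) in Spark.
clicks = [(u, 1) for u, e in events if e == "click"]

# Group by key, then sum per key, mirroring reduceByKey(add). Spark does
# this grouping via a shuffle across partitions; here a sort suffices
# (groupby requires its input to be sorted by the grouping key).
counts = {k: sum(v for _, v in g)
          for k, g in groupby(sorted(clicks), key=lambda kv: kv[0])}

print(counts)  # {'user1': 2, 'user2': 1, 'user3': 1}
```

The same logic as a Spark DataFrame query would be a filter on the event column followed by groupBy("user").count(); the RDD view above just makes the underlying map and reduce stages explicit.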
Posted 5 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
Create Solution Outlines and Macro Designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform.
Contribute to pre-sales and sales support through RfP responses, solution architecture, planning and estimation.
Contribute to reusable component/asset/accelerator development to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies.
Participate in customer PoCs to deliver the outcomes.
Participate in delivery reviews, product reviews and quality assurance, and work as a design authority.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems.
Experience in data engineering and architecting data platforms.
Experience in architecting and implementing data platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen 1 / Gen 2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow.
Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred Technical And Professional Experience
Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem.
Experience and exposure to implementation of Data Fabric and Data Mesh concepts, and solutions like Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.
Posted 5 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Gen24 Flybiz offers comprehensive services for aspiring pilots, airlines, and training organisations. In 2025, the Avion Flight Training Centre Mumbai - operated by Gen24 - will be opened. At our facility, pilots can train on state-of-the-art Full Flight Simulators (FFS) and Flight Navigation Procedures Trainer (FNPTII) devices. Currently, our centre operates two Airbus A320neo Full Flight Simulators from Avion and an A320 FNPTII for APS MCC training, built by Simnest. Over the coming years, we will expand to six to eight Full Flight Simulators, including additional Airbus A320s and Boeing 737 MAX devices, providing comprehensive training solutions for airlines and individual pilots.

About the job
Gen24 is looking for a Core Software Engineer to help develop core software for Full Flight Simulators. The core software facilitates the distributed real-time simulation of all models required for the simulation. It allows the user to interact with the simulation via the Instructor Operating System and generates the simulated graphics for the cockpit displays. It also consists of several Graphical User Interfaces (GUIs) used by developers and simulator maintenance personnel.

Tasks & Responsibilities
Design and develop supporting tools for the core framework:
Real-time monitoring
Graphical User Interfaces
Graphics Generator Editor
Diagnostic tools
Mobile and web applications
Maintain and upgrade key components of the core framework:
Real-time scheduling
Shared memory
Multi-node syncing
Graphics Generator
Mobile and web applications

Required Skills and Experience
High analytical skills.
Ability to translate high-level functional requirements and technical specifications into working products.
Demonstrated experience with software development in C++, Scala, Java or a related language.
Experience with software development for Windows, Linux and/or mobile platforms.
Experience with GUI development, preferably in JavaFX or Qt.
Good verbal and written communication skills in English.
Strong work ethic: comfortable in a fast-paced, entrepreneurial company environment.
Ability to learn and adapt quickly to maximise productivity.

Desirable Skills and Experience
Affinity with real-time simulation, distributed computing and multithreading.
Understanding of data structures in memory and network protocols such as UDP and TCP.
Understanding of Object-Oriented Programming and Design Patterns.
Knowledge of the Scala (or Java) programming language.
Knowledge of OpenGL.
Familiarity with reverse engineering of code and troubleshooting.
Experience in full-stack web development (MEAN, MERN, and/or others) is considered a big plus.
Experience with Python and JavaScript.
Experience with Scala and Svelte.
Experience with markup languages (HTML, XML, LaTeX) and web application design.
Experience with developing mobile applications, front- and back-end.

Location
This position is based at the Avion Flight Training Centre (operated by Gen24) in Mumbai, India.

Benefits
Become a part of Gen24. Working at Gen24 means having a challenging job in a successful and entrepreneurial environment where initiative and a high degree of freedom in acting are basic principles. Working together within and between teams is essential for our success. Likewise, we cooperate closely with our partners and customers to achieve the best results. You will have significant influence on and responsibility for the outcome of technically challenging projects. Gen24 will create the conditions that enable you to truly grow as a (technical) specialist. We will do so by providing support, training, and opportunities to further develop your talents in a stimulating and inspiring environment.

Gen24 is an equal-opportunity employer. We celebrate our inclusive work environment and encourage people of all backgrounds and perspectives to apply. At Gen24, we are committed to having an inclusive and transparent environment where every voice is heard and acknowledged. We embrace our differences and know that our diverse team is a strength that drives our success.

Do you think you meet the criteria, and are you up for a new challenge? We look forward to hearing from you! You can apply via the Join.com webpage. Please include your motivation letter and resume.
Posted 5 days ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title : Engineering Manager(Application) Location: Bangalore, India Department: Engineering Reports to: VP Engineering We are currently seeking an experienced Engineering Manager with a robust technical background and a track record of hands-on development, particularly in technologies such as Apache Spark and Kubernetes. The ideal candidate will possess profound coding skills in Scala and have a history of leading design innovations that significantly enhance application performance, system availability, and elastic scaling. As a strict yet supportive manager and mentor, this role demands a balance of driving on-time, high-quality releases, while fostering an environment of growth and development among team members. Roles & responsibilities: Technical Leadership: Guide the engineering team in the adoption and mastery of specific tech stacks like Apache Spark, Reactjs, Flink, Kubernetes, and Scylla DB. Ensure the team is updated with the latest technological advancements and best practices. Hands-on Development: Maintain an active role in coding, particularly in Scala, and lead by example. Participate in and lead design innovations that result in significant improvements in application performance, system availability, and the ability to scale elastically. Project Management: Oversee multiple projects from inception to completion, ensuring that all deliverables meet the highest quality standards and are delivered on schedule. Implement and refine project management processes to streamline workflow and increase efficiency. Mentorship and Team Development: Serve as a strict yet supportive mentor to the engineering team. Foster professional growth and development by setting clear expectations, providing regular feedback, and conducting performance evaluations. Quality Assurance: Uphold high standards for code quality, documentation, and testing. 
Implement best practices for continuous integration and deployment and ensure that all releases meet rigorous quality assurance criteria. Collaboration and Communication: Facilitate effective communication within the engineering team and across departments. Work closely with product managers, designers, and other stakeholders to ensure alignment of goals and seamless collaboration. Requirements: Bachelor's or Master's degree in Computer Science from top institutions in India At least 12 years of experience in software development, with a minimum of 5 years in a leadership role managing 6-8 direct reportees. Extensive hands-on experience with technologies such as Scala, Reactjs, Apache Spark, Flink, Kubernetes, and Scylla DB. Proven track record of being an open-source committer for Apache Spark: this is a must-have. Proven track record as a hands-on coder in Scala, with preferred knowledge in AI or machine learning. Demonstrated ability to lead teams to deliver on-time and high-quality software releases. This is a must-have. Strong project management skills, with experience in agile methodologies. Exceptional problem-solving abilities and a keen attention to detail. Excellent verbal and written communication skills. Personal Attributes: Strategic Thinker: Able to see the big picture and develop long-term strategies while also paying attention to detail. Problem Solver: Capable of troubleshooting and resolving issues quickly and efficiently. Adaptable: Able to thrive in a fast-paced, dynamic environment and manage changing priorities. Collaborative: Works well with others and can build strong relationships with team members, and stakeholders. Creative : Brings innovative ideas to the table and is always looking for new ways to engage clients and improve sales strategies. Key Metrics to Track: Team Velocity: Measures the team's productivity and output over time. On-time Delivery : Tracks the percentage of projects or features delivered on schedule. 
Code Quality: Assesses code reviews, defect rates, and technical debt.
Team Retention and Satisfaction: Evaluates employee turnover and satisfaction levels.
System Reliability and Uptime: Measures downtime and system performance.
Innovation and Improvement: Assesses contributions to new ideas, processes, or technologies.
Cross-functional Collaboration: Evaluates effective communication and collaboration with other departments.

Introducing Tookitaki
Tookitaki is positioned as one of the most intelligent financial crime prevention platforms available. This distinction is driven by our innovative use of collective intelligence and a federated approach. Our Anti-Financial Crime (AFC) Ecosystem leverages an expert network that continuously updates and shares knowledge, acting as a force multiplier. This collaborative model significantly outperforms the siloed approaches used by our competitors, ensuring our clients benefit from the most comprehensive and up-to-date financial crime prevention strategies. The AFC Ecosystem draws on a vast, community-driven repository of financial crime patterns, continuously updated by industry experts. Leading digital banks and payment platforms across Asia, including GXS, Tencent, and Maya, trust this approach to stay protected against evolving money laundering and fraud tactics. By joining this ecosystem, our clients benefit from the collective intelligence of top industry players, ensuring unparalleled protection.
Posted 5 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
Gojek is a leading on-demand services company in Indonesia providing a variety of services like bike hailing, car hailing, food delivery, and more. We leverage cutting-edge technology and data-driven insights to deliver unparalleled user experiences and operational efficiency. If you're a data scientist at heart, this role is for you, because you'll be mining insights from a sea of data, building data products, and designing experiments, with the ability to see the real-time impact of your contribution. As a Data Scientist at Gojek, you will be at the forefront of leveraging data to drive strategic and operational improvements. You will lead complex analytical projects, mentor junior data scientists, and collaborate with cross-functional teams to develop and implement data-driven strategies that enhance our service offerings and operational efficiency.

What You Will Do
Design and implement sophisticated statistical and machine learning models to solve complex business problems, optimize service delivery, and predict user behaviour, using techniques such as deep learning, natural language processing, and time-series analysis.
Partner with senior stakeholders, including product managers, engineers, and executives, to understand business objectives and translate them into actionable data insights. Provide strategic recommendations to drive business growth and operational excellence.
Lead and mentor a team of data scientists and analysts. Provide guidance on best practices, model development, and analytical techniques. Foster a collaborative and high-performance environment within the data science team.
Develop and enforce data governance and quality standards. Oversee data pipeline development, ensuring data accuracy, consistency, and accessibility. Advocate for and implement best practices in data management and analytics.
Design and execute A/B tests and other experimentation methodologies to assess the impact of changes in product features, user interactions, and service delivery. Analyze results and make data-driven recommendations for optimization.
Create high-impact visualizations and dashboards to communicate complex data insights to non-technical stakeholders. Present findings and recommendations in a clear, actionable manner to drive decision-making.
Stay abreast of the latest trends and advancements in data science, machine learning, and analytics. Apply innovative techniques and tools to enhance analytical capabilities and contribute to the company's competitive edge.

What You Will Need
A Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field; advanced academic qualifications are highly desirable.
2-4 years of experience in data science or a related field, preferably within the on-demand services or technology industry.
Expertise in programming languages such as Python, R, or Scala, as well as proficiency with data manipulation and visualization libraries (e.g., pandas, NumPy, matplotlib, seaborn).
Understanding of statistical concepts and techniques, with experience applying them to real-world problems.
Excellent communication and interpersonal skills, with the ability to effectively convey complex technical concepts to both technical and non-technical audiences.
A passion for learning and innovation, with a desire to stay ahead of the curve in the rapidly evolving field of data science and technology.

About The Team
Our Data Science team currently consists of 40+ people based in India, Indonesia, and Singapore who run Southeast Asia's leading Gojek business. We oversee all things data and work to become a thought partner for our Business Users, Product Team, and Decision Makers. It's our job to ensure that they have a structural approach to data-driven problem-solving.
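The experimentation work described in "What You Will Do" usually comes down to comparing a treatment metric against control. As an illustrative sketch only (the conversion counts and the function name are invented, and a real analysis at this scale would also need power calculations and guardrail metrics), a two-proportion z-test needs nothing beyond the standard library:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard-normal tail via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 control vs 150/2400 treatment conversions.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
# Reject H0 at alpha = 0.05 only when p < 0.05.
```

For these illustrative numbers the lift is not significant at the 5% level, which is exactly the kind of call the role's "data-driven recommendations" hinge on.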
Right now, our focus revolves around how to make customers, drivers, and merchants happy and delighted. We have so far created millions of dollars in impact across different journeys of customers, drivers, and merchants. We work hand-in-glove with the Engineering, PM, and strategy functions, be it constructing a new product or brainstorming on problems such as how to reduce wait times for drivers, how to improve assortment, or whether to treat convenience-seeking customers differently from value-seeking customers. As a team, we're concerned not only with the growth of the company, but with each other's personal and professional growth, too. Coming from diverse backgrounds, we often have fun sessions to talk about everything and anything, from data insights to our current movie lists.

About GoTo Group
GoTo Group is the largest digital ecosystem in Indonesia, with a mission to "Empower Progress" by offering technological infrastructure and solutions for everyone to access and thrive in the digital economy. The GoTo ecosystem consists of on-demand transportation services, food and grocery delivery, logistics and fulfillment, as well as financial and payment services through the Gojek and GoTo Financial platforms. It is the first platform in Southeast Asia to host these crucial use cases in a single ecosystem, capturing the majority of Indonesia's vast consumer households.

About Gojek
Gojek is Southeast Asia's leading on-demand platform and a pioneer of the multi-service ecosystem, with over 2.5 million driver partners across the region offering a wide range of services such as transportation, food delivery, logistics, and more. With its mission to create impact at scale, Gojek is committed to resolving consumer problems and raising standards of living by connecting consumers to the best providers of goods and services in the market.

About GoTo Financial
GoTo Financial accelerates financial inclusion through its leading financial services and merchant solutions.
Its consumer services include GoPay and GoPayLater, and it serves businesses of all sizes through Midtrans, Moka, GoBiz Plus, GoBiz, and Selly. With its trusted and inclusive ecosystem of products, GoTo Financial is open to new growth opportunities and aims to empower everyone to Make It Happen, Make It Together, Make It Last.

GoTo and its business units, including Gojek and GoTo Financial ("GoTo"), only post job opportunities on our official channels on our respective company websites and on LinkedIn. GoTo is not liable for any job postings or job offers that did not originate from us. You should conduct your own due diligence to avoid falling victim to fake job scams that did not originate from GoTo's official recruitment channels.
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
P-928
At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. And we're only getting started in Bengaluru, India!

As a Software Engineer at Databricks India, you can get to work across:
Backend
DDS (Distributed Data Systems)
Full Stack

The Impact You'll Have
Our Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:
Problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.
Delivering reliable, high-performance services and client libraries for storing and accessing huge amounts of data on cloud storage backends, e.g., AWS S3 and Azure Blob Storage.
Building reliable, scalable services (e.g., in Scala on Kubernetes) and data pipelines (e.g., Apache Spark™, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that empower customers to easily view and control platform usage.

Our DDS team spans: Apache Spark™, Data Plane Storage, Delta Lake, Delta Pipelines, and Performance Engineering.

As a Full Stack software engineer, you will work closely with your team and product management to bring that delight through great user experience.
What We Look For
BS (or higher) in Computer Science or a related field
3+ years of production-level experience in one of: Python, Java, Scala, C++, or a similar language
Experience developing large-scale distributed systems from scratch
Experience working on a SaaS platform or with service-oriented architectures

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 5 days ago
4.0 years
0 Lacs
India
Remote
Job Title: Databricks Engineer
Location: Remote
Experience Level: 4-5 Years
Employment Type: Full-time

Required Qualifications:
6-7 years of experience in data engineering, with at least 3 years working with Databricks in production environments.
Strong proficiency in Python and SQL.
Experience with Spark (PySpark/Scala), preferably in a Databricks environment.
Experience building and managing data pipelines on AWS, Azure, or GCP.
Solid understanding of data lake, data warehouse, and data mesh architectures.
Familiarity with modern data formats like Parquet, Avro, and Delta Lake.
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
Strong understanding of data quality, observability, and governance best practices.
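The last qualification, data quality and governance, is often implemented as rule-based validation before records land in the lake. A stdlib-only sketch under assumed record shapes (the rule set and column names are invented for illustration; this is not a Databricks or Delta Lake API):

```python
from typing import Any, Callable

# Hypothetical rules: each column name maps to a predicate its value must satisfy.
RULES: dict[str, Callable[[Any], bool]] = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"INR", "USD", "SGD"},
}

def validate(records: list) -> tuple:
    """Split records into (clean, quarantined) based on RULES."""
    clean, quarantined = [], []
    for rec in records:
        ok = all(rule(rec.get(col)) for col, rule in RULES.items())
        (clean if ok else quarantined).append(rec)
    return clean, quarantined

good, bad = validate([
    {"order_id": 1, "amount": 250.0, "currency": "INR"},
    {"order_id": -5, "amount": 10.0, "currency": "EUR"},  # fails two rules
])
```

In a real pipeline the quarantined records would be written to a dead-letter location and surfaced through observability metrics rather than silently dropped.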
Posted 5 days ago
4.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Consultant (Sr. Executive) / Sr. Consultant (AM)
Location: Bangalore/Mumbai
Job Mode: Permanent
Experience: 4 to 10 years

Role Overview: Spark/Scala Developer (Big Data)
We are seeking experienced Big Data Engineers with strong expertise in Scala and Apache Spark to join our team. The ideal candidate will be responsible for designing and implementing scalable data solutions, optimizing performance, and translating business requirements into technical deliverables.

Key Requirements
4 to 9 years of experience as a Big Data Engineer or in a similar role
Minimum of 2 years' experience in Scala programming and SQL
Experience in designing, modifying, and implementing solutions for ingesting, provisioning, and processing data in a Hadoop data lake for batch and streaming workloads using Scala and Apache Spark
Experience in debugging, optimization, and performance tuning of Spark jobs
Ability to translate functional requirements/user stories into technical solutions
Good experience in developing and debugging complex SQL queries to derive business-critical insights
Good to have: development experience in any of the cloud services: AWS/Azure/GCP
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview: Spark/Scala Developer (Big Data)
We are seeking experienced Big Data Engineers with strong expertise in Scala and Apache Spark to join our team. The ideal candidate will be responsible for designing and implementing scalable data solutions, optimizing performance, and translating business requirements into technical deliverables.

Key Requirements
3 to 5 years of experience as a Big Data Engineer or in a similar role
Minimum of 2 years' experience in Scala programming and SQL
Experience in designing, modifying, and implementing solutions for ingesting, provisioning, and processing data in a Hadoop data lake for batch and streaming workloads using Scala and Apache Spark
Experience in debugging, optimization, and performance tuning of Spark jobs
Ability to translate functional requirements/user stories into technical solutions
Good experience in developing and debugging complex SQL queries to derive business-critical insights
Good to have: development experience in any of the cloud services: AWS/Azure/GCP
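The "complex SQL queries to derive business-critical insights" requirement can be illustrated with a window-function query of the kind that runs equally well in Spark SQL. The schema and numbers below are invented for the example, and sqlite3 stands in for the warehouse engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('south', 100), ('south', 300), ('north', 250), ('north', 50);
""")

# Rank regions by total revenue: a typical "insight" query combining
# aggregation with a window function.
rows = conn.execute("""
    SELECT region,
           SUM(amount) AS revenue,
           RANK() OVER (ORDER BY SUM(amount) DESC) AS rnk
    FROM orders
    GROUP BY region
    ORDER BY rnk
""").fetchall()
# rows -> [('south', 400.0, 1), ('north', 300.0, 2)]
```

The same statement, pointed at a Hive or Delta table, would be a candidate for the Spark-job tuning the requirements mention (partition pruning, shuffle sizing, and so on).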
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Evernorth
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Data Engineering Advisor

Position Summary
We are looking for a Databricks Data Engineer to join our Pharmacy Benefit Management Clinical Space (PBS) engineering team as part of Care Delivery Solutions. As a Data Engineer, the candidate will work with a highly agile team of developers to develop, execute, validate, and maintain the PBS ecosystem. The candidate needs to be creative, responsive, flexible, and willing to participate in an open, collaborative peer environment and to guide the team as necessary. The candidate enjoys working in a team of high performers who hold each other accountable to perform to their very best, and does not shy away from opportunities to provide and receive feedback with team members. The candidate works towards delivering a minimum viable product with proper testing, avoids scope creep, and follows software engineering best practices as defined by Evernorth. The candidate is expected to actively participate in all ceremonies, such as daily stand-ups, story grooming, user story reviews, and sprint retrospectives.

About the PBS Org
The current PBS engineering organization focuses on enabling product capabilities for the PBS business. These include the conceptualization, architecture, design, development, and support functions for the PBS business products. The strategic roadmap for PBS focuses on patient activation and routine care for various lines of business. The following are the different capabilities of the PBS engineering organization:
Clinical data mart management and development of integrations with POS and Router applications
Development of non-clinical apps
Data integrations for the Home-based Care engineering business
Data interoperability
Shared services capabilities

Responsibilities
Work with solution architects to drive the definition of the data solution design, mapping business and technical requirements to define data assets that meet both business and operational expectations.
Own and manage data models and data design artefacts, and provide guidance and consultancy on best practice and standards for customer-focused data delivery and data management practices.
Be an advocate for data-driven design within an agile delivery framework.
Plan and implement procedures that will maximize engineering and operating efficiency for application integration technologies. Identify and drive process improvement opportunities.
Actively participate in the full project lifecycle, from early shaping of high-level estimates and delivery plans through to active governance of the solution as it is developed and built in later phases.
Capture and manage risks, issues, and assumptions identified through the lifecycle, articulating the financial and other impacts associated with these concerns.
Take complete accountability for the technology assets owned by the team.
Provide leadership, ensuring the team meets the following objectives:
Design, configuration, and implementation of middleware products and application design/development within the supported technologies and products.
Proactive monitoring and management design of supported assets, assuring performance, availability, security, and capacity.
Sizing user stories based on the time/difficulty to complete; providing input on specific challenges facing user stories; discussing risks, dependencies, and assumptions; and selecting user stories to be completed in the iteration based on user story priority and team capacity and velocity.
Qualifications
Experience leading data design and delivering significant assets to an organization, e.g. a data warehouse, data lake, or Customer 360 data platform.
Demonstrable experience across data capabilities such as data modelling, data migration, data quality management, and data integration, with a preference for ETL/ELT and data streaming experience.
Experience with ETL and analytics tools such as Databricks, Apache Airflow, automation of data pipeline processes, AWS, SQL Server, Tableau, Boomi, and Power BI.
Experience in Python, Java, or Scala. Proficiency in SQL is crucial for database management.
Experience with big data technologies like Hadoop, Spark, and Apache Kafka.
Experience with data warehousing solutions like Amazon Redshift or Google BigQuery.
A track record of working successfully in a globally dispersed team would be beneficial.
Familiarity with agile methodology, including Scrum team leadership.
Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
Proven experience with the architecture, design, and development of large-scale enterprise application solutions.
Strong written and verbal communication skills, with the ability to interact with all levels of the organization.
Proactive participation in design sessions, Program Increment (PI) planning, and sprint refinement meetings.

Required Experience & Education
3 to 7 years of IT experience, with 2 to 6 years in a Data Architecture or Data Engineering role, is required.
College degree (Bachelor's) in related technical/business areas, or equivalent work experience.

Desired Experience
Exposure to serverless AWS
Exposure to EKS

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality.
We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
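The "automation of data pipeline processes" with tools like Apache Airflow, mentioned in the qualifications above, reduces to running tasks in dependency order. A minimal stdlib sketch (the task names are invented, and this toy runner stands in for a scheduler; it is not Airflow's API):

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the set of tasks it depends on.
DAG = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "load": {"aggregate"},
    "report": {"load"},
}

def run(dag, actions):
    """Execute task callables in dependency order, Airflow-style."""
    executed = []
    for task in TopologicalSorter(dag).static_order():
        actions[task]()  # a real pipeline would do the task's work here
        executed.append(task)
    return executed

log = []
actions = {t: (lambda t=t: log.append(t)) for t in DAG}
order = run(DAG, actions)
```

Real orchestrators add retries, backfills, and parallel execution of independent branches on top of exactly this topological-ordering core.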
Posted 5 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About This Role
Wells Fargo is seeking a Senior Software Engineer. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow.

In This Role, You Will
Lead moderately complex initiatives and deliverables within technical domain environments
Contribute to large-scale planning of strategies
Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments
Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
Resolve moderately complex issues and lead a team to meet existing client needs or potential new client needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
Lead projects and act as an escalation point; provide guidance and direction to less experienced staff

Required Qualifications:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
4+ years of Spark development experience
4+ years of Scala/Java development for Spark, focusing on the functional programming paradigm
Spark SQL, Streaming, and DataFrame/Dataset API experience
Spark query tuning and performance optimization
SQL and NoSQL database integration with Spark (MS SQL Server and MongoDB)
Deep understanding of distributed systems (CAP theorem, partitioning and bucketing, replication, memory layouts, consistency)
Deep understanding of Hadoop/cloud platforms, HDFS, ETL/ELT processes, and Unix shell scripting
Good to have: Java, .NET experience
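The "partitioning and bucketing" item above can be sketched without a cluster: rows are assigned to a fixed number of buckets by hashing the key, so two tables bucketed the same way can be joined bucket-by-bucket without a shuffle. An illustrative stdlib-only analogue (the bucket count and keys are invented; Spark's actual bucketing uses its own Murmur-based hash):

```python
import zlib

NUM_BUCKETS = 4

def bucket_for(key: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Deterministic bucket assignment, analogous to hash bucketing.

    zlib.crc32 is used because Python's built-in hash() is salted
    per process and would not be stable across runs.
    """
    return zlib.crc32(key.encode()) % num_buckets

def bucketize(rows, key_fn, num_buckets=NUM_BUCKETS):
    """Scatter rows into num_buckets lists by their hashed key."""
    buckets = [[] for _ in range(num_buckets)]
    for row in rows:
        buckets[bucket_for(key_fn(row), num_buckets)].append(row)
    return buckets

buckets = bucketize(
    [{"cust": "a1"}, {"cust": "b2"}, {"cust": "a1"}],
    key_fn=lambda r: r["cust"],
)
# Rows with the same key always land in the same bucket, which is
# what lets a bucketed join skip the shuffle stage.
```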
Good to have: any of the cloud data engineering certifications.

Job Expectations:
Experience working in an Agile development methodology, with Git and Jira
Experience/working knowledge of technologies like Kafka, Cassandra, Oracle RDBMS, and JSON structures
Python development with/without Spark
Experience in the banking/financial domain

Posting End Date: 5 Aug 2025. The job posting may come down early due to the volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.
Applicants With Disabilities
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy
Wells Fargo maintains a drug-free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements
Third-party recordings are prohibited unless authorized by Wells Fargo. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Reference Number: R-475748
Posted 5 days ago
2.0 - 4.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Training-related experience

Must have
Teaching experience: conducting training sessions in the classroom and dynamically responding to different capabilities of learners; experience in analyzing feedback from sessions and identifying action areas for self-improvement
Developing teaching material: experience in developing teaching material, including exercises and assignments
Good presentation skills; excellent oral/written communication skills

Nice to have
Teaching experience: experience in delivering sessions over virtual classrooms
Instructional design: developing engaging content
Designing assessments: experience in designing assessments to evaluate the effectiveness of training and gauge the proficiency of the learner
Participation in activities of the software development lifecycle, such as development, testing, and configuration management

Job Responsibilities
Develop teaching materials, including exercises and assignments
Conduct classroom training/virtual training
Design assessments
Enhance course material and course delivery based on feedback to improve training effectiveness

Location: Mysore, Mangalore, Bangalore, Chennai, Pune, Hyderabad, Chandigarh

Description of the Profile
We are looking for trainers with 2 to 4 years of teaching experience and technology know-how in one or more of the following areas:
Java – Java programming, Spring, Angular/React, Bootstrap
Microsoft – C# programming, SQL Server, ADO.NET, ASP.NET, MVC design pattern, Azure, MS Power Platform, MS Dynamics 365 CRM, MS Dynamics 365 ERP, SharePoint
Testing – Selenium, Micro Focus UFT, Micro Focus ALM tools, SOA testing, SoapUI, REST Assured, Appium
Big Data – Python programming, Hadoop, Spark, Scala, MongoDB, NoSQL
SAP – SAP ABAP programming/SAP MM/SAP SD/SAP BI/SAP S/4HANA
Oracle – Oracle E-Business Suite (EBS)/PeopleSoft/Siebel CRM/Oracle Cloud/OBIEE/Fusion Middleware
API and integration – APIs, microservices, TIBCO, Apigee, Mule
Digital Commerce – Salesforce, Adobe
Experience Manager
Digital Process Automation – Pega, Appian, Camunda, Unqork, UiPath
MEAN/MERN stacks
Business Intelligence – SQL Server, ETL using SQL Server, analysis using SQL Server, enterprise reporting using SQL, visualization
Data Science – Python for data science, machine learning, exploratory data analysis, statistics & probability
Cloud & Infrastructure Management – network administration/database administration/Windows administration/Linux administration/middleware administration/end-user computing/ServiceNow; cloud platforms like AWS/GCP/Azure/Oracle Cloud; virtualization
Cybersecurity – infra security/identity & access management/application security/governance & risk compliance/network security
Mainframe – COBOL, DB2, CICS, JCL
Open source – Python, PHP, Unix/Linux, MySQL, Apache, HTML5, CSS3, JavaScript
DBMS – Oracle/SQL Server/MySQL/DB2/NoSQL
Design patterns, Agile, DevOps
Posted 6 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad, India
Type of Position: Full Time

Introduction:
Pragmatic Play is one of the world's leading suppliers of online slots, casino, live dealer, and bingo games, with new and exciting products and verticals added on a continuous basis. Pragmatic Play currently employs over 2,000 people in over 12 locations and has seen consistent, triple-digit growth year on year. Our award-winning slots portfolio contains unique in-house content consisting of over 200 proven HTML5 games, available in many currencies, 31 languages, and all major certified markets. Millions of players play our games every day across hundreds of operators such as Flutter, Bet365, Entain, Kindred, Gamesys, LeoVegas, Betsson, and many others. We are a team of passionate individuals with the mission to succeed and create industry-leading games that players love.

The Role:
This role will be required to build ETL data pipelines for data warehousing. The candidate needs to have hands-on experience with SQL and any of the standard ETL tools like Talend, Informatica, Pentaho, etc. They are expected to have good data modelling skills to transform data as per business needs, and should be familiar with scripting languages like Python, shell scripting, or Java; exposure to big data technologies like Spark/Scala/PySpark will be an added advantage. The successful candidate will have experience in building and managing complex data marts and will develop database solutions to ensure company information is stored effectively and securely. The candidate should also be a self-starter with strong attention to detail, vocally self-critical, and able to work in a fast-paced environment.
Key Responsibilities:
Develop database solutions to store and retrieve company information
Design conceptual and logical data models and flowcharts
Improve system performance by conducting tests, troubleshooting, and integrating new elements
Optimize new and current database systems
Use scripting languages to automate key processes governing data movement, cleansing, and processing activities

Key skills:
Excellent analytical and quantitative skills: proven analytical and quantitative skills and the tools to perform analysis; ability to use hard data and metrics to back up assumptions and develop project business cases
Attention to detail: while understanding the big picture, you are an organized and detail-oriented person. Nothing gets missed on your watch, and you can lead projects to completion.
Working to the highest standards, even under pressure: you're ambitious, hold yourself to high standards, and thrive in a dynamic, high-energy environment.
Passion for i-gaming: you love games and want to be part of a winning team.

Requirements:
2+ years of strong experience with data transformation and ETL on large data sets
2+ years of data modeling experience (relational, dimensional, columnar, big data)
2+ years of complex SQL or NoSQL experience
Experience in advanced data warehouse concepts
Expertise in SQL and advanced SQL
Experience with industry ETL tools (e.g., Informatica, Talend)
Experience with reporting technologies (e.g., Tableau, Power BI)
Strong verbal and written communication skills
Must be self-managed, proactive, and customer-focused
Experience in programming languages (Python, Java, or Bash scripting)

Good to have:
Experience with big data technologies (e.g., Hadoop, Spark, Redshift, Vertica, Hive, etc.)
Experience as an enterprise technical or engineering consultant
Degree in Computer Science, Information Systems, Data Science, or a related field
Posted 6 days ago
3.0 years
7 - 10 Lacs
Bengaluru
On-site
DESCRIPTION Are you passionate about solving business challenges at a global scale? Amazon Employee Services is looking for an experienced Business Analyst to join the Retail Business Services team and help unlock insights that take our business to the next level. The candidate will be excited about understanding and implementing new and repeatable processes to improve our employees' global work-authorization experience. They will do this by partnering with key stakeholders, staying curious, and digging deep into business challenges to identify insights that enable us to define standards for globally scaling this program. They will be comfortable delivering and presenting recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to the business decision-making process. This role requires an individual with excellent analytical abilities as well as outstanding business acumen. The candidate knows and values our customers (internal and external) and will work backwards from the customer to create structured processes for global expansion of work authorization, and help integrate new countries and new acquisitions into the existing program. They are experts in partnering with and earning the trust of operations and business leaders to drive key business decisions. Responsibilities: Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards. Partner with operations and business teams to consult on, develop and implement KPIs, automated reporting and process solutions, and process improvements to meet business needs. Enable effective decision-making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format. Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks.
Participate in strategic and tactical planning discussions. Design, develop and maintain scaled, automated, user-friendly systems, reports and dashboards that will support our business needs. Apply excellent writing skills to create artifacts easily digestible by business and tech partners. Key job responsibilities: Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight. Understand the requirements of stakeholders and map them to the data sources/data warehouse. Own the delivery and backup of periodic metrics and dashboards to the leadership team. Draw inferences and conclusions, create dashboards and visualizations of processed data, and identify trends and anomalies. Execute high-priority (i.e., cross-functional, high-impact) projects to improve operations performance with the help of Operations Analytics managers. Perform business analysis and data queries using appropriate tools. Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area. BASIC QUALIFICATIONS 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience. Experience defining requirements and using data and metrics to draw business insights. Experience with SQL or ETL. Knowledge of data visualization tools such as QuickSight, Tableau, Power BI or other BI packages. 1+ years of tax, finance or related analytical field experience. PREFERRED QUALIFICATIONS Experience in Amazon Redshift and other AWS technologies. Experience creating complex SQL queries joining multiple datasets, and with ETL/DW concepts. Experience in Scala and PySpark. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 6 days ago
2.0 years
4 - 8 Lacs
Bengaluru
On-site
DESCRIPTION About Amazon.com: Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world. Overview of the role The Business Research Analyst will be responsible for the data and machine learning parts of continuous improvement projects across the Discoverability space. This will require collaboration with local and global teams. The Research Analyst should be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. The Research Analyst will perform big data analysis to identify patterns and train models to generate product-to-product relationships and product-to-brand/model relationships. The Research Analyst is also expected to continuously improve the ML/LLM solutions in terms of precision and recall, efficiency and scalability. The Research Analyst should be able to write clear and detailed functional specifications based on business requirements. Key job responsibilities As a Research Analyst, you'll collaborate with experts to develop advanced machine learning or large language model (ML/LLM) solutions for business needs. You'll drive product pilots, demonstrating innovative thinking and customer focus. You'll build scalable solutions, write high-quality code, and develop state-of-the-art ML/LLM models. You'll coordinate between science and software teams, optimizing solutions.
The role requires thriving in ambiguous, fast-paced environments and working independently with ML/LLM models. Key job responsibilities: Collaborate on and propose best-in-class ML/LLM solutions for business requirements. Dive deep to drive product pilots, demonstrating innovation and customer obsession to steer the product roadmap. Develop scalable solutions by writing high-quality code, building ML/LLM models using current research breakthroughs and implementing performance-optimization techniques. Coordinate design efforts between science and software teams to deliver optimized solutions. Communicate technical concepts to stakeholders at all levels. Thrive amid ambiguous, uncertain and fast-moving ML/LLM use-case development. Be familiar with ML/LLM models and able to work independently. BASIC QUALIFICATIONS Bachelor's degree in math, statistics, engineering or another equivalent quantitative discipline. 2+ years of relevant work experience in solving real-world business problems using machine learning, deep learning, data mining and statistical algorithms. Strong hands-on programming skills in Python, SQL, Hadoop/Hive; additional knowledge of Spark, Scala, R or Java is desired but not mandatory. Strong analytical thinking. Ability to creatively solve business problems, innovating new approaches where required and articulating ideas to a wide range of audiences using strong data, written and verbal communication skills. Ability to collaborate effectively across multiple teams and stakeholders, including development teams, product management and operations. PREFERRED QUALIFICATIONS Master's degree with specialization in ML, NLP or Computer Vision preferred. 3+ years of relevant work experience in related fields (project management, customer advocate, product owner, engineering, business analysis); diverse experience will be favored, e.g. a mix of experience across different roles. In-depth understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services. Technical expertise and experience in data science, ML and statistics. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
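The posting's goal of improving ML/LLM solutions in terms of precision and recall can be illustrated with a small, dependency-free sketch; the labels below are invented and stand in for, say, predicted product-to-brand matches.

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for one positive class.

    precision = TP / (TP + FP): of everything we predicted positive, how much was right.
    recall    = TP / (TP + FN): of everything truly positive, how much we found.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical labels: 1 = a correct product-to-brand relationship.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(precision_recall(y_true, y_pred))  # precision 2/3, recall 2/3
```

Tuning a model typically trades these two off, which is why the posting names both.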
Posted 6 days ago
1.0 years
4 - 8 Lacs
Bengaluru
On-site
DESCRIPTION As a Research Analyst, you'll collaborate with experts to develop advanced machine learning solutions for business needs. You'll drive product pilots, demonstrating innovative thinking and customer focus. You'll build scalable solutions, write high-quality code, and develop state-of-the-art ML models. You'll coordinate between science and software teams, optimizing solutions. The role requires thriving in ambiguous, fast-paced environments and working independently with ML models. Key job responsibilities: Collaborate with Applied Scientists to implement ML/LLM solutions that meet business goals. Conduct product pilots demonstrating customer obsession and innovation. Develop scalable solutions by writing high-quality code, building ML/LLM models using current research breakthroughs and implementing performance-optimization techniques. Act as a bridge between science and software teams to deliver optimized solutions. Communicate technical concepts to stakeholders at all levels. Develop technical documentation for design specifications, algorithms, implementation challenges and performance metrics. Monitor and maintain existing solutions to ensure peak performance. About the team The Retail Business Systems (RBS) group is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space through best-in-class catalog quality, wide selection, and supply-chain defect and compliance programs. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled have a direct impact on customer buying decisions and the online user experience.
BASIC QUALIFICATIONS Bachelor's degree in quantitative or STEM disciplines (Science, Technology, Engineering, Mathematics). 1+ years of relevant work experience in solving real-world business problems using machine learning, deep learning, data mining and statistical algorithms. Strong hands-on programming skills in Python, SQL, Hadoop/Hive; additional knowledge of Spark, Scala, R or Java is desired but not mandatory. Strong analytical thinking. Ability to creatively solve business problems, innovating new approaches where required and articulating ideas to a wide range of audiences using strong data, written and verbal communication skills. Ability to collaborate effectively across multiple teams and stakeholders, including development teams, product management and operations. PREFERRED QUALIFICATIONS Master's degree with specialization in ML, NLP or Computer Vision preferred. 1+ years of relevant work experience in related fields (project management, customer advocate, product owner, engineering, business analysis); diverse experience will be favored, e.g. a mix of experience across different roles. In-depth understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services. Technical expertise and experience in data science, ML and statistics. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing… We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics, telecom network performance, and service assurance analytics. We are working on cutting-edge technologies like digital twins to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer you will be collaborating with business product owners, coaches, industry-renowned data scientists and system architects to develop strategic data solutions from sources that include batch, file and data streams. As a subject matter expert on solutions and platforms, you will be responsible for providing technical leadership to various projects on the data platform team. You are expected to have depth of knowledge in specified technological areas, including knowledge of applicable processes, methodologies, standards, products and frameworks. Driving the technical design of large-scale data platforms, utilizing modern and open source technologies, in a hybrid cloud environment. Setting standards for data engineering functions; designing templates for the data management program which are scalable, repeatable, and simple.
Building strong multi-functional relationships and getting recognized as a data and analytics subject matter expert among other teams. Collaborating across teams to select appropriate data sources and develop data extraction and business rule solutions. Sharing and incorporating best practices from the industry using new and upcoming tools and technologies in data management and analytics. Organizing, planning and developing solutions to sophisticated data management problem statements. Defining and documenting architecture, capturing and documenting non-functional (architectural) requirements, preparing estimates and defining technical solutions to proposals (RFPs). Designing and developing reusable and scalable data models to suit business deliverables. Designing and developing data pipelines. Providing technical leadership to the project team across design-to-deployment activities, providing guidance, performing reviews, and preventing and resolving technical issues. Collaborating with the engineering, DevOps and admin teams to ensure alignment with efficient design practices, and fixing issues in dev, test and production environments so that infrastructure is highly available and performing as expected. Designing, implementing, and deploying high-performance, custom solutions. Where you'll be working… In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. What we’re looking for... You are curious and passionate about data and truly believe in the high impact it can create for the business. People count on you for your expertise in data management in all phases of the software development cycle. You enjoy the challenge of solving complex data management problems and juggling priorities in a multifaceted, complex and deadline-oriented environment. Building effective working relationships and collaborating with other technical teams across the organization comes naturally to you.
You'll need to have… Six or more years of relevant experience. Knowledge of information systems and their applications to data management processes. Experience performing detailed analysis of business problems and technical environments and designing the solution. Experience working with Google Cloud Platform and BigQuery. Experience working with big data technologies and utilities: Hadoop, Spark, Scala, Kafka, NiFi. Experience with relational SQL and NoSQL databases. Experience with data pipeline, workflow management and governance tools. Experience with stream-processing systems. Experience with object-oriented/object-function scripting languages. Experience building data solutions for machine learning and artificial intelligence. Knowledge of data analytics and modeling tools. Even better if you have… Master’s degree in Computer Science or a related field. Experience with frontend/web technologies: React JS, CSS, HTML. Experience with backend services: Java Spring Boot, Node JS. Experience working with data and visualization products. Certifications in any data warehousing/analytics solutioning. Certifications in GCP. Ability to clearly articulate the pros and cons of various technologies and platforms. Experience collaborating with multi-functional teams and managing partner expectations. Written and verbal communication skills. Ability to work in a fast-paced agile development environment. #AI&D Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
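One way to picture the stream-processing experience listed above is a tumbling-window aggregation, the core primitive behind frameworks like Spark Structured Streaming and Kafka Streams. This dependency-free sketch uses hypothetical network telemetry; event shapes and names are invented.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows.

    Each event falls into exactly one window whose start is the timestamp
    rounded down to a multiple of window_seconds — the same bucketing a
    streaming engine applies before aggregating per window and key.
    """
    windows = defaultdict(int)
    for ts, key in events:
        bucket = ts - (ts % window_seconds)
        windows[(bucket, key)] += 1
    return dict(windows)

# Hypothetical telemetry: (epoch seconds, cell-tower id).
events = [(0, "tower-a"), (30, "tower-a"), (65, "tower-b"), (70, "tower-a")]
print(tumbling_window_counts(events))
# {(0, 'tower-a'): 2, (60, 'tower-b'): 1, (60, 'tower-a'): 1}
```

A real streaming job adds watermarks and late-data handling on top, but the window bucketing is the same.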
Posted 6 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Requirement gathering and analysis. Experience with different databases such as Synapse, SQL DB, Snowflake, etc.
Design and implement data pipelines using Azure Data Factory, Databricks and Synapse. Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases. Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage. Implement data security and governance measures. Monitor and optimize data pipelines for performance and efficiency. Troubleshoot and resolve data engineering issues. Provide optimized solutions for any problem related to data engineering. Ability to work with a variety of sources such as relational databases, APIs, file systems, real-time streams, CDC, etc. Strong knowledge of Databricks and Delta tables. Mandatory Skill Sets: Azure Databricks, ADF, or Synapse Analytics; Python. Preferred Skill Sets: Experience with Delta Lake, Power BI, or Azure DevOps. Knowledge of Spark, Scala, or other distributed processing frameworks. Exposure to BI tools like Power BI, Tableau, or Looker. Familiarity with data security and compliance in the cloud. Experience in leading a development team.
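The Delta-table knowledge mentioned above largely comes down to MERGE (upsert) semantics: update rows that match on a key, insert rows that don't. The sketch below imitates that behavior in plain Python purely for illustration; a real Databricks pipeline would use the DeltaTable merge API, and all row data here is invented.

```python
def merge_upsert(target, changes, key="id"):
    """Delta-style MERGE sketch: update matching rows, insert new ones.

    `target` is the existing table's rows, `changes` is the incoming
    CDC/incremental batch; both are lists of dicts keyed on `key`.
    """
    by_key = {row[key]: dict(row) for row in target}
    for change in changes:
        # Matched on key -> update in place; unmatched -> insert.
        by_key.setdefault(change[key], {}).update(change)
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
changes = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
print(merge_upsert(target, changes))
# [{'id': 1, 'status': 'open'}, {'id': 2, 'status': 'closed'}, {'id': 3, 'status': 'open'}]
```

This is the same whenMatchedUpdate / whenNotMatchedInsert split that a Delta MERGE expresses declaratively, which is what makes Delta tables a natural sink for CDC sources.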
Years Of Experience Required: 4 – 7 yrs. Education Qualification: B.Tech/MBA or MCA. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope One SASE combines Netskope’s market-leading Intelligent SSE with its next-generation Borderless SD-WAN to protect users, applications, and data everywhere with AI-powered zero trust security, while providing fast, reliable access and optimized connectivity to any application from any network location or device, including IoT– at scale. Click here to learn more about Netskope IoT Security. What's In It For You As a member of the IoT Security Team you will be working on some of the most challenging problems in the field of zero trust and IoT security. 
You will play a key role in the design, development, evolution and operation of a system that analyzes hundreds of parameters from discovered devices and leverages our rich contextual intelligence for device classification, risk assessment, granular access control and network segmentation. What You Will Be Doing Contributing to the design, development, scaling and operation of Netskope IoT Security. Identifying and incorporating emerging technologies and best practices into the team. Refining existing technologies to make the product more performant. Developing the OT security part of the solution. Owning all cloud components and driving architecture and design. Engaging in cross-functional team conversations to help prioritize tasks, communicate goals clearly to team members, and drive overall project delivery. Required Skills And Experience Scala and Java: writing object-oriented and functional code, writing UDFs, using Scala with Spark, the collections framework, logging, and sending metrics to Grafana. Spark and Kafka: understanding of RDDs, DataFrames and Datasets, broadcast variables, Spark Streaming with Kafka, Spark cluster settings, executor and driver setup, and Kafka topics and offsets. Good knowledge of Python programming, microservices architecture and REST APIs is also desired. Education BSCS or equivalent required, MSCS or equivalent strongly preferred. Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate.
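The broadcast-variable requirement above has a simple intuition: a small lookup table is shipped once to every executor so the large event stream can be joined against it without a shuffle. The posting asks for Scala, but this dependency-free Python sketch of the enrichment pattern carries the same idea; the MAC addresses and device classes are invented.

```python
# The small "broadcast" side: a device-classification lookup that in Spark
# would be wrapped with sc.broadcast(...) and read on every executor.
device_types = {"aa:bb": "camera", "cc:dd": "printer"}

def classify(events, lookup):
    """Enrich each (mac, event_id) record with its device class.

    In a Spark job this is the map-side join: each partition reads the
    broadcast lookup locally instead of shuffling the big stream.
    """
    return [(mac, lookup.get(mac, "unknown")) for mac, _ in events]

# The large streaming side: discovered-device events.
events = [("aa:bb", 101), ("ee:ff", 102), ("cc:dd", 103)]
print(classify(events, device_types))
# [('aa:bb', 'camera'), ('ee:ff', 'unknown'), ('cc:dd', 'printer')]
```

Broadcasting only pays off when the lookup side is small enough to fit in each executor's memory, which is exactly the case for a device-classification table.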
Netskope respects your privacy and is committed to protecting the personal information you share with us, please refer to Netskope's Privacy Policy for more details.
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Requirement gathering and analysis. Experience with different databases such as Synapse, SQL DB, Snowflake, etc.
Design and implement data pipelines using Azure Data Factory, Databricks and Synapse. Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases. Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage. Implement data security and governance measures. Monitor and optimize data pipelines for performance and efficiency. Troubleshoot and resolve data engineering issues. Provide optimized solutions for any problem related to data engineering. Ability to work with a variety of sources such as relational databases, APIs, file systems, real-time streams, CDC, etc. Strong knowledge of Databricks and Delta tables. Mandatory Skill Sets: Azure Databricks, ADF, or Synapse Analytics; Python. Preferred Skill Sets: Experience with Delta Lake, Power BI, or Azure DevOps. Knowledge of Spark, Scala, or other distributed processing frameworks. Exposure to BI tools like Power BI, Tableau, or Looker. Familiarity with data security and compliance in the cloud. Experience in leading a development team.
Years Of Experience Required: 4 – 7 yrs. Education Qualification: B.Tech/MBA or MCA. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 6 days ago