7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
About the job: At Citi, we're not just building technology, we're building the future of banking. Encompassing a broad range of specialties, roles, and cultures, our teams are creating innovations used across the globe. Citi is constantly growing and progressing through our technology, with a laser focus on evolving the ways of doing things. As one of the world's most global banks, we're changing how the world does business.

Shape your Career with Citi. We're currently looking for a high-caliber professional to join our team as AVP - Data Engineer based in Pune, India. Being part of our team means that we'll provide you with the resources to meet your unique needs, empower you to make healthy decisions, and manage your financial well-being to help plan for your future. For instance:
- We provide programs and services for your physical and mental well-being, including access to telehealth options, health advocates, confidential counseling, and more. Coverage varies by country.
- We empower our employees to manage their financial well-being and help them plan for the future.
- We provide access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses.

In this role, you're expected to:

Responsibilities:
- Data pipeline development, design, and automation.
- Design and implement efficient database structures to ensure optimal performance and support analytics.
- Design, implement, and optimize secure data pipelines to ingest, process, and store large volumes of structured and unstructured data from diverse sources, including vulnerability scans, security tools, and assessments.
- Work closely with stakeholders to provide clean, structured datasets that enable advanced analytics and insights into cybersecurity risks, trends, and remediation activities.

Technical Competencies:
- 7+ years of hands-on experience with Scala and Spark.
- 10+ years of experience in designing and developing data pipelines for data ingestion or transformation using Spark with Scala.
- Good experience in Big Data technologies (HDFS, Hive, Apache Spark, Spark SQL, Spark Streaming, Spark job optimization, and Kafka).
- Good exposure to various file formats (JSON, Avro, Parquet).
- Knowledge of agile (Scrum) development methodology is a plus.
- Strong development/automation skills.
- Right attitude to participate and contribute through all phases of the development lifecycle.
- Secondary skillset: NoSQL, Starburst, Python.
- Optional: Java Spring, Kubernetes, Docker.

Competencies (soft skills):
- Strong communication skills.
- Responsible for reporting to both business and technology senior management.
- Work with stakeholders and keep them updated on developments, estimation, delivery, and issues.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
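For a rough sense of the pipeline work this posting describes (ingesting JSON, Avro, and Parquet data for analytics), here is a minimal sketch. The role emphasizes Spark with Scala; the equivalent pattern is shown in PySpark for brevity, and the bucket paths and column names are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: ingest raw JSON scan records, normalize them,
# and persist as partitioned Parquet for downstream analytics.
spark = SparkSession.builder.appName("vuln-scan-ingest").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/vuln_scans/")  # hypothetical source path

cleaned = (
    raw.filter(F.col("scan_id").isNotNull())            # drop incomplete records
       .withColumn("scan_date", F.to_date("scan_ts"))   # normalize timestamp to a date
       .select("scan_id", "asset_id", "severity", "scan_date")
)

# Partitioning by date keeps later queries cheap for analysts.
cleaned.write.mode("overwrite").partitionBy("scan_date").parquet(
    "s3://example-bucket/curated/vuln_scans/"
)
```

A production Scala job would follow the same read-clean-partition-write shape, with tuning (partition counts, file sizes, caching) added per workload.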
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
delhi
On-site
As an experienced data analytics professional with 1 to 2 years of experience, you will be responsible for developing and implementing data analytics methodologies. Your role will require good interpersonal skills along with excellent communication abilities. Your technical skills must include proficiency in Python, machine learning, deep learning, data wrangling, and integration with Big Data tools such as Hadoop, Sqoop, Impala, Hive, Pig, and SparkR. You should also have a solid understanding of statistics, data mining, algorithms, time series analysis, forecasting, SQL queries, and Tableau data visualization. Having a good grasp of technologies like Hadoop, HBase, Hive, Pig, MapReduce, Python, R, Java, Apache Spark, Impala, and machine learning algorithms is essential for this role. Your responsibilities will involve developing training content on Big Data and Hadoop technologies for students, working professionals, and corporates. You will conduct both online and classroom training sessions, provide practical use cases and assignments, and design self-paced recorded training sessions. It's important to continuously enhance teaching methodologies for an effective online learning experience and work collaboratively in small teams to make a significant impact. You will be tasked with designing and overseeing the development of real-time projects to provide practical exposure to the trainees. Additionally, you may work as a consultant or architect in developing and training real-time Big Data applications for corporate clients either on a part-time or full-time basis. Hands-on knowledge of tools like Anaconda Navigator, Jupyter Notebook, Hadoop, Hive, Pig, MapReduce, Apache Spark, Impala, SQL, and Tableau will be required to excel in this role.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
rajkot, gujarat
On-site
As a Software Developer at our company, you will collaborate with client developers & off-shore developers to deliver project tasks efficiently. Your responsibilities will include analyzing system domain and client requirements, creating and maintaining a robust system to support business operations with a focus on usability, and ensuring the completion of necessary milestones within the constraints of planned projects. You should possess very strong knowledge in Object-Oriented Programming (OOP) and Core Java, along with hands-on experience in Java EE and Spring Boot. Experience in technologies such as Spring, Spring Security, Hibernate, JPA, and RESTful services is required. Additionally, familiarity with web technologies like jQuery, Angular, React, Validation Engine, JSON, GSON, Ajax, CSS, and HTML5 is essential. The ideal candidate will demonstrate excellent debugging and problem-solving skills, along with a good understanding of Oracle, Postgres, MySQL, and MSSQL Server databases. Knowledge of Microservices Architecture & Design Principles, build tools like Maven or Gradle, application security fundamentals, performance tuning, scalability, and code versioning tools such as Git, SVN, and TFS is crucial. It is desirable for candidates to have knowledge and command over big data technologies like Apache Hadoop, Apache Spark, and AWS EMR. Proficiency in front-end JS frameworks like ReactJS and Angular, as well as knowledge of search engines such as Apache Solr, Elasticsearch, and AWS OpenSearch, are considered advantageous. Familiarity with cloud platforms is also a plus. To excel in this role, you must demonstrate a proactive nature of working with ownership, strong problem-solving skills, and excellent written and verbal communication abilities. If you meet these qualifications and are excited about this opportunity, please send your resume to hr@prominentpixel.com.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
vadodara, gujarat
On-site
As a Lead Data Engineer at Rearc, you will play a crucial role in establishing and maintaining technical excellence within our data engineering team. Your extensive experience in data architecture, ETL processes, and data modeling will be key in optimizing data workflows for efficiency, scalability, and reliability. Collaborating closely with cross-functional teams, you will design and implement robust data solutions that align with business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders is essential as you drive data-driven initiatives and ensure their successful implementation. With over 10 years of experience in data engineering or related fields, you bring a wealth of expertise in managing and optimizing data pipelines and architectures. Your proficiency in Java and/or Python, along with experience in data pipeline orchestration using platforms like Airflow, Databricks, DBT, or AWS Glue, will be invaluable. Hands-on experience with data analysis tools and libraries such as Pyspark, NumPy, Pandas, or Dask is required, while proficiency with Spark and Databricks is highly desirable. Your proven track record of leading complex data engineering projects, coupled with hands-on experience in ETL processes, data warehousing, and data modeling tools, enables you to deliver efficient and robust data pipelines. You possess in-depth knowledge of data integration tools and best practices, as well as a strong understanding of cloud-based data services and technologies like AWS Redshift, Azure Synapse Analytics, and Google BigQuery. Your strategic and analytical skills will enable you to solve intricate data challenges and drive data-driven decision-making. In this role, you will collaborate with stakeholders to understand data requirements and challenges, implement data solutions with a DataOps mindset using modern tools and frameworks, lead data engineering projects, mentor junior team members, and promote knowledge sharing through technical blogs and articles. Your exceptional communication and interpersonal skills will facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels. At Rearc, we empower engineers to build innovative products and experiences by providing them with the best tools possible. If you are a cloud professional with a passion for problem-solving and a desire to make a difference, join us in our mission to solve problems and drive innovation in the field of data engineering.,
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
kochi, kerala
On-site
You will be responsible for capturing user requirements and translating them into business and digitally enabled solutions across various industries. Your key responsibilities will include designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. You will solve complex data problems to deliver insights that help the business achieve its objectives. Your tasks will involve sourcing structured and unstructured data from different touchpoints, formatting and organizing them into an analyzable format. Additionally, you will create data products for analytics team members to enhance productivity and utilize AI services like vision and translation to generate outcomes for further pipeline steps. Moreover, you will be instrumental in fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. You will prepare data to establish a unified database and construct tracking solutions to ensure data quality. You should be proficient in creating production-grade analytical assets deployed using the guiding principles of CI/CD. The ideal candidate should be an expert in Python, Scala, Pyspark, Pytorch, and at least two other languages like JavaScript. You should have extensive experience in data analysis within Big Data environments, data libraries such as Pandas, SciPy, Tensorflow, Keras, and SQL. A minimum of 2-3 years of hands-on experience with these technologies is required. In addition, you should have experience working with BI tools like Tableau, Power BI, or Looker and possess a good working knowledge of key concepts in data analytics such as dimensional modeling, ETL, reporting/dashboarding, data governance, structured and unstructured data handling, and infrastructure requirements. Experience in cloud data warehouses like Redshift or Synapse and certification in AWS, Azure, Snowflake, or Databricks data analytics is preferred. The role requires 3.5 to 5 years of experience and a graduation degree. This job is based in Kochi, Coimbatore, or Trivandrum and requires expertise in Python/Scala, Pyspark/Pytorch, and familiarity with Redshift.,
Posted 2 weeks ago
12.0 - 17.0 years
0 Lacs
pune, maharashtra
On-site
We are seeking a skilled Azure Databricks Architect with 12 to 17 years of experience in Python and SQL. As an Azure Databricks Architect at our company, you will be responsible for data architecture, data engineering, and analytics. You should have at least 5 years of hands-on experience with Azure Databricks, Apache Spark, and Delta Lake. Your proficiency in Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure SQL is essential. Expertise in Python, Scala, and SQL for data processing, along with a deep understanding of data modeling, ETL/ELT processes, and distributed computing, are key requirements. Experience with CI/CD pipelines and DevOps practices in data engineering is also expected. Excellent communication and stakeholder management skills are crucial for this role. Possessing Azure certifications such as Azure Solutions Architect or Azure Data Engineer would be a plus. Your responsibilities will include implementing ML/AI models in Databricks, utilizing data governance tools like Purview, and working with real-time data processing using Kafka, Event Hubs, or Stream Analytics. Additionally, you will enjoy competitive salary and benefits, a culture focused on talent development, and opportunities to work with cutting-edge technologies. Employee engagement initiatives, annual health check-ups, and insurance coverage are also part of the benefits package. Persistent Ltd. is committed to fostering diversity and inclusion in the workplace. We welcome applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. Hybrid work options, flexible working hours, and accessible facilities are available to support employees with diverse needs and preferences. Our inclusive environment aims to enable all employees to thrive while accelerating growth both professionally and personally, impacting the world in positive ways, and enjoying collaborative innovation with diversity and work-life wellbeing at the core. If you are ready to unleash your full potential at Persistent, please contact pratyaksha_pandit@persistent.com. Persistent is an Equal Opportunity Employer that prohibits discrimination and harassment of any kind.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Sr. Data Engineer at Lifesight in Bangalore, you will be responsible for building highly scalable, fault-tolerant distributed data processing systems that handle massive amounts of data ingested daily. You will work on processing petabyte-sized data warehouses and Elasticsearch clusters, optimizing data pipelines for quality and resilience, and refining diverse datasets into simplified models to encourage self-service. Your role will involve owning data mapping, business logic, transformations, and ensuring data quality through low-level systems debugging and performance optimization on large production clusters. Additionally, you will participate in architecture discussions, influence product roadmaps, and take ownership of new projects while maintaining and supporting existing platforms and transitioning to newer technology stacks. To excel in this role, you should have proficiency in Python and PySpark, a deep understanding of Apache Spark including tuning and data frame building, and the ability to create Java/Scala Spark jobs for data transformation and aggregation. Your experience with big data technologies such as HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, and Presto, as well as distributed environments using tools like Kafka, Spark, Hive, and Hadoop, will be invaluable. Familiarity with distributed database systems, various file formats like Parquet and Avro, and NoSQL databases is essential, along with experience in cloud platforms like AWS and GCP. Ideally, you should have at least 5 years of professional experience as a data or software engineer. Joining Lifesight means being part of a fast-growing Marketing Measurement Platform with a global impact, where you can influence key decisions on tech stack, product development, and scalable solutions. You will work in small, agile teams within a non-bureaucratic, fast-paced environment that values innovation, collaboration, and personal well-being. Competitive compensation, benefits, and a culture of empowerment that prioritizes work-life balance and team camaraderie await you at Lifesight.,
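The posting above centers on refining raw datasets into simplified, self-service models with PySpark. As a hedged illustration of that pattern, here is a minimal sketch; the event schema, paths, and partition/coalesce choices are hypothetical, not Lifesight's actual pipeline:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Hypothetical raw event table stored as Parquet
events = spark.read.parquet("s3://example-bucket/events/")

# Refine raw events into a simplified daily model for self-service analytics
daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .groupBy("event_date", "campaign_id")
          .agg(F.countDistinct("user_id").alias("unique_users"),
               F.sum("conversions").alias("conversions"))
)

# Coalesce to a sensible file count before writing, a common tuning step
daily.coalesce(32).write.mode("overwrite") \
     .partitionBy("event_date").parquet("s3://example-bucket/marts/daily_campaign/")
```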
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Technology Lead Analyst position is a senior-level role that involves establishing and implementing new or revised application systems and programs in coordination with the Technology team. Your main objective will be to lead applications systems analysis and programming activities. As the Applications Development Technology Lead Analyst, your responsibilities will include partnering with multiple management teams to ensure appropriate integration of functions to meet goals, identifying and defining necessary system enhancements for deploying new products and process improvements, and resolving a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards. You will also need to provide expertise in the area, possess advanced knowledge of applications programming, ensure application design aligns with the overall architecture blueprint, develop standards for coding, testing, debugging, and implementation, and have a comprehensive understanding of how different areas of business integrate to achieve business goals. Additionally, you will be required to provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions, serve as an advisor or coach to mid-level developers and analysts, and appropriately assess risk when making business decisions. To qualify for this role, you should have 6-10 years of relevant experience in Apps Development or systems analysis, extensive experience in system analysis and programming of software applications, experience in managing and implementing successful projects, and be a Subject Matter Expert (SME) in at least one area of Applications Development. Other qualifications include the ability to adjust priorities quickly, demonstrated leadership and project management skills, and clear and concise written and verbal communication. An educational background should include a Bachelor's degree/University degree or equivalent experience, with a Master's degree preferred. As a Vice President (VP), you will be responsible for leading a technical vertical (Frontend, Backend, or Data), mentoring developers, and ensuring timely, scalable, and testable delivery across your domain. Your responsibilities will involve leading a domain-specific team of 68 engineers, translating architecture into execution with detailed designs and guidance, reviewing complex components built using various programming languages and frameworks, leading data platform migration projects, integrating CI/CD pipelines, enforcing code quality, evaluating AI-based tools for productivity, testing, and code improvement, and demonstrating strong mentoring, conflict resolution, and cross-team communication skills. In terms of required skills, you should have 10-14 years of experience leading development teams and delivering cloud-native solutions, with 2 years in tech leadership, proficiency in programming languages such as Java, Python, JavaScript/TypeScript, familiarity with frameworks like Spring Boot/WebFlux, Angular, Node.js, databases like Oracle, MongoDB, Redis, strong SQL skills, cloud technologies including ECS, S3, Lambda, RDS, Kubernetes, data technologies like Apache Spark with Python, Snowflake, data migration tools, development practices such as TDD, CI/CD pipelines, Git workflows, and quality tools like SonarQube and automated testing frameworks. 
This job description serves as a high-level overview of the work performed and may require other job-related duties as assigned. If you require a reasonable accommodation due to a disability, please review Accessibility at Citi for assistance.,
Posted 3 weeks ago
3.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
As a member of the Dun & Bradstreet team, you will play a crucial role in unlocking the power of data through analytics to create a better tomorrow. Our global community of over 6,000 team members is dedicated to accelerating creativity, innovation, and growth in order to help clients turn uncertainty into confidence, risk into opportunity, and potential into prosperity. We welcome bold and diverse thinkers who are passionate about making a positive impact. You will be responsible for designing and developing data pipelines within our Big Data ecosystem using technologies such as Apache Spark and Apache Airflow. Your role will involve architecting, building, and deploying scalable and efficient data pipelines while ensuring clarity and maintainability through proper documentation. Additionally, you will demonstrate expertise in data architecture and management, including familiarity with data lakes, modern data warehousing practices, and distributed data processing solutions. Your programming and scripting skills in Python will be put to the test as you write clean, efficient, and maintainable code to support cloud-based infrastructures such as AWS and GCP. You will be tasked with managing and optimizing cloud-based data infrastructure to ensure efficient data storage and retrieval. Workflow orchestration using Apache Airflow will also be a key aspect of your responsibilities, requiring you to develop and manage workflows for scheduling and orchestrating data processing jobs. Innovation and optimization are at the core of what we do, and you will be expected to create detailed designs and proof-of-concepts to enable new workloads and technical capabilities on our platform. Collaboration with platform and infrastructure engineers will be essential to implement these capabilities in production. Your strong knowledge of Big Data architecture, coupled with hands-on experience in technologies like Hadoop, Spark, and Hive, will be invaluable in this role. To be successful in this position, you should have a minimum of 8 years of hands-on experience with Big Data technologies, including at least 3 years of experience with Spark. Hands-on experience with dataproc and managing solutions deployed in the Cloud are highly desirable. Additionally, a minimum of 6 years of experience in Cloud environments, preferably GCP, and any experience with NoSQL and Graph databases will be beneficial. Experience working in a Global company, particularly in a DevOps model, is considered a plus. If you are ready to join a dynamic team of passionate individuals who are committed to driving innovation and growth, we invite you to explore career opportunities at Dun & Bradstreet by visiting https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.,
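Since workflow orchestration with Apache Airflow is called out above, here is a minimal, hypothetical DAG sketch showing the scheduling-and-dependency idea; the DAG id and task bodies are placeholders, not Dun & Bradstreet's actual workflows:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files into the data lake")    # stand-in for real ingestion logic

def transform():
    print("submit the Spark transformation job")  # stand-in for a spark-submit call

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```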
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate level role where you will be responsible for contributing to the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to assist in applications systems analysis and programming activities.

You will utilize your knowledge of applications development procedures and concepts, along with basic knowledge of technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will install and support customer exposure systems and apply fundamental knowledge of programming languages for design specifications.

As an Intermediate Programmer Analyst, you will analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. You will be responsible for identifying problems, analyzing information, and making evaluative judgments to recommend and implement solutions. Operating with a limited level of direct supervision, you will exercise independence of judgment and autonomy while acting as a subject matter expert to senior stakeholders and/or other team members.

In this role, it is crucial to appropriately assess risk when making business decisions, with a focus on safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 4-6 years of proven experience in developing and managing Big Data solutions using Apache Spark and Scala is required
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, Flume, etc.
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Experience in working on Kafka, JMS/MQ applications
- Familiarity with data warehousing concepts and ETL processes
- Knowledge of data modeling, data architecture, and data integration techniques
- Experience with Java, Web services, XML, JavaScript, Microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience with developing frameworks and utility services, logging/monitoring, and high-quality software delivery
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Profound knowledge of implementing different data storage solutions such as RDBMS, Hive, HBase, Impala, and NoSQL databases

Education:
- Bachelor's degree or equivalent experience

This job description provides a high-level overview of the responsibilities and qualifications for the Applications Development Intermediate Programmer Analyst position. Other job-related duties may be assigned as required.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
telangana
On-site
You will be responsible for designing and building backend components of our MLOps platform in Python on AWS. This includes collaborating with geographically distributed cross-functional teams and participating in an on-call rotation with the rest of the team to handle production incidents. To be successful in this role, you should have at least 3+ years of professional backend development experience with Python. Additionally, you should have experience with web development frameworks such as Flask or FastAPI, as well as working with WSGI & ASGI web servers like Gunicorn and Uvicorn. Experience with concurrent programming designs such as AsyncIO, containers (Docker), AWS ECS or AWS EKS, unit and functional testing frameworks, and public cloud platforms like AWS is also required. Nice-to-have skills include experience with Apache Kafka and developing Kafka client applications in Python, MLOps platforms such as AWS Sagemaker, Kubeflow, or MLflow, big data processing frameworks like Apache Spark, DevOps & IaC tools such as Terraform and Jenkins, various Python packaging options like Wheel, PEX, or Conda, and metaprogramming techniques in Python. You should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field. In addition to competitive salaries and benefits packages, Nisum India offers its employees continuous learning opportunities, parental medical insurance, various activities for team building, and free meals including snacks, dinner, and subsidized lunch.,
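The backend work described above revolves around Python web frameworks (Flask or FastAPI), AsyncIO, and ASGI servers. A minimal, hypothetical FastAPI sketch of such an endpoint (service name, route, and scoring logic are invented for illustration):

```python
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-mlops-backend")  # hypothetical service name

class ScoreRequest(BaseModel):
    features: list[float]

@app.post("/score")
async def score(req: ScoreRequest) -> dict:
    # Placeholder for an awaited call to a model server or feature store
    await asyncio.sleep(0)
    return {"prediction": sum(req.features)}  # dummy scoring logic

# Run behind an ASGI server, e.g.:  uvicorn main:app --workers 2
```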
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
As a Software Engineer - Backend (Python) with over 7 years of experience, you will be based in Hyderabad and play a crucial role in developing the backend components of the GenAI Platform. Your responsibilities will include designing and constructing backend features for the platform on AWS, collaborating with cross-functional teams spread across different locations, and participating in an on-call rotation for managing production incidents. To excel in this role, you must possess the following skills: - A minimum of 7 years of professional experience in backend web development using Python. - Proficiency in AI, RAG, DevOps, and Infrastructure as Code (IaC) tools like Terraform and Jenkins. - Familiarity with MLOps platforms such as AWS Sagemaker, Kubeflow, or MLflow. - Expertise in web development frameworks like Flask, Django, or FastAPI. - Knowledge of concurrent programming concepts like AsyncIO. - Experience with public cloud platforms such as AWS, Azure, or GCP, preferably AWS. - Understanding of CI/CD practices, tools, and frameworks. Additionally, the following skills would be advantageous: - Experience with Apache Kafka and developing Kafka client applications using Python. - Familiarity with big data processing frameworks, particularly Apache Spark. - Proficiency in containers (Docker) and container platforms like AWS ECS or AWS EKS. - Expertise in unit and functional testing frameworks. - Knowledge of various Python packaging options such as Wheel, PEX, or Conda. - Understanding of metaprogramming techniques in Python. Join our team and contribute to creating a safe, compliant, and efficient access platform for LLMs, leveraging both Opensource and Commercial resources while adhering to Experian standards and policies. Be a part of a dynamic environment where you can utilize your expertise to build innovative solutions and drive the growth of the GenAI Platform.,
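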
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
The ideal candidate should have 3-5 years of experience in implementing scalable and sustainable data engineering solutions using tools such as Databricks, Snowflake, Teradata, Apache Spark, and Python. Your responsibilities will include creating, maintaining, and optimizing data pipelines as workloads transition from development to production for specific use cases. You will be responsible for end-to-end development, including coding, testing, debugging, and deployment. Automation is key, and you will drive the use of modern tools and techniques to automate repetitive data preparation and integration tasks to enhance productivity. You will be mapping data between source systems, data warehouses, and data marts, as well as training counterparts in data pipelining and preparation techniques. Collaboration is essential, and you will interface with other technology teams to extract, transform, and load data from various sources. You will also play a crucial role in promoting data and analytics capabilities to business unit leaders, educating them on leveraging these capabilities to achieve their business goals. You should be proficient in converting SQL queries into Python code running on a distributed system and developing libraries for code reusability. An eagerness to learn new technologies in a fast-paced environment and excellent communication skills are essential for this role. Experience with data pipeline and workflow management tools such as Rundeck and Airflow, AWS cloud services like EC2, EMR, RDS, Redshift, and stream-processing systems like Spark-Streaming would be advantageous.,
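One concrete skill named above is converting SQL queries into Python code that runs on a distributed system. A minimal illustration of the same aggregation expressed both ways in PySpark (the table and column names are made up):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sql-to-dataframe").getOrCreate()
orders = spark.read.table("sales.orders")  # hypothetical table in the metastore

# SQL version
orders.createOrReplaceTempView("orders")
by_region_sql = spark.sql("""
    SELECT region, SUM(amount) AS total_amount
    FROM orders
    WHERE order_date >= '2024-01-01'
    GROUP BY region
""")

# Equivalent DataFrame (Python) version, easier to wrap in reusable library functions
by_region_df = (
    orders.filter(F.col("order_date") >= "2024-01-01")
          .groupBy("region")
          .agg(F.sum("amount").alias("total_amount"))
)
```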
Posted 3 weeks ago
5.0 - 12.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a Data Software Engineer, you will be responsible for utilizing your 5-12 years of experience in Big Data & Data-related technologies to contribute to the success of projects in Chennai and Coimbatore in a Hybrid work mode. You should possess an expert level understanding of distributed computing principles and a strong knowledge of Apache Spark, with hands-on programming skills in Python. Your role will involve working with technologies such as Hadoop v2, Map Reduce, HDFS, Sqoop, Apache Storm, and Spark-Streaming to build stream-processing systems. You should have a good grasp of Big Data querying tools like Hive and Impala, as well as experience in integrating data from various sources including RDBMS, ERP, and Files. Experience with NoSQL databases such as HBase, Cassandra, MongoDB, and knowledge of ETL techniques and frameworks will be essential for this role. You will be tasked with performance tuning of Spark Jobs, working with AZURE Databricks, and leading a team efficiently. Additionally, your expertise in designing and implementing Big Data solutions, along with a strong understanding of SQL queries, joins, stored procedures, and relational schemas will be crucial. As a practitioner of AGILE methodology, you will play a key role in the successful delivery of data-driven projects.,
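For the stream-processing side mentioned above, here is a minimal PySpark Structured Streaming sketch. Kafka is used as an assumed source (the posting names Spark Streaming and Apache Storm; any streaming source would fit), the broker address, topic, and schema are hypothetical, and the spark-sql-kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
          .option("subscribe", "events")                        # hypothetical topic
          .load())

# Kafka delivers bytes; cast the value and parse the JSON payload
parsed = stream.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

query = (parsed.writeStream.outputMode("append")
         .format("console")              # console sink for illustration only
         .option("truncate", "false")
         .start())
query.awaitTermination()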
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help the retail business make data-driven decisions. You will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Key Responsibilities:
- Leverage Retail Knowledge: Utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models.
- Use AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Utilize advanced statistical methods to optimize existing use cases and build new products to serve new challenges and use cases.
- Stay updated on the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills to interact with both technical and non-technical stakeholders.
- A solid understanding of the retail business and consumer behavior.
- Programming Languages: Python, R, SQL, Scala
- Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn
- Big Data Technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

Additional Information:
- Job Title: Retail Specialized Data Scientist
- Management Level: 09 - Consultant
- Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata
- Company: Accenture

This position requires a solid understanding of retail industry dynamics, strong communication skills, proficiency in Python for data manipulation, statistical analysis, and machine learning, as well as familiarity with big data processing platforms and ETL processes. The Retail Specialized Data Scientist will be responsible for gathering, cleaning, and analyzing data to provide valuable insights for business decision-making and optimization of pricing strategies based on market demand and customer behavior.
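As a small, hedged illustration of the demand-forecasting work described in this posting, here is a minimal scikit-learn sketch on synthetic data; real retail pipelines would add far richer feature engineering, backtesting, and validation:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical weekly sales data: price, promotion flag, and week-of-year features
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "price": rng.uniform(5, 20, 500),
    "on_promo": rng.integers(0, 2, 500),
    "week_of_year": rng.integers(1, 53, 500),
})
df["units_sold"] = 100 - 3 * df["price"] + 25 * df["on_promo"] + rng.normal(0, 5, 500)

X_train, X_test, y_train, y_test = train_test_split(
    df[["price", "on_promo", "week_of_year"]], df["units_sold"], random_state=0
)

# Fit a simple regression model and check error on held-out weeks
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```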
Posted 3 weeks ago
3.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
This is a data engineer position where you will be responsible for designing, developing, implementing, and maintaining data flow channels and data processing systems to support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner in coordination with the Data & Analytics team. Your main objective will be to define optimal solutions for data collection, processing, and warehousing, particularly within the banking & finance domain. You must have expertise in Spark Java development for big data processing, Python, and Apache Spark. You will be involved in designing, coding, and testing data systems and integrating them into the internal infrastructure.

Your responsibilities will include ensuring high-quality software development with complete documentation, developing and optimizing scalable Spark Java-based data pipelines, designing and implementing distributed computing solutions for risk modeling, pricing, and regulatory compliance, ensuring efficient data storage and retrieval using Big Data, implementing best practices for Spark performance tuning, maintaining high code quality through testing, CI/CD pipelines, and version control, working on batch processing frameworks for Market risk analytics, and promoting unit/functional testing and code inspection processes. You will also collaborate with business stakeholders, Business Analysts, and other data scientists to understand and interpret complex datasets.

Qualifications:
- 5-8 years of experience in working in data ecosystems
- 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting, and other Big data frameworks
- 3+ years of experience with relational SQL and NoSQL databases such as Oracle, MongoDB, HBase
- Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, Dataframes, Spark Streaming, etc.), Scala, and SQL
- Data integration, migration, and large-scale ETL experience
- Data modeling experience
- Experience building and optimizing big data pipelines, architectures, and datasets
- Strong analytic skills and experience working with unstructured datasets
- Experience with various technologies like Confluent Kafka, Redhat JBPM, CI/CD build pipelines, Git, BitBucket, Jira, external cloud platforms, container technologies, and supporting frameworks
- Highly effective interpersonal and communication skills
- Experience with software development life cycle

Education:
- Bachelors/University degree or equivalent experience in computer science, engineering, or a similar domain

This is a full-time position in the Data Architecture job family group within the Technology sector.
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
haryana
On-site
The Tech Consultant - Data & Cloud role involves supporting a leading international client with expertise in data engineering, cloud platforms, and big data technologies. As a skilled professional, you will contribute to large-scale data initiatives, implement cloud-based solutions, and collaborate with stakeholders to drive data-driven innovation. You will design scalable data architectures, optimize ETL processes, and leverage cloud technologies to deliver impactful business solutions.

Key Responsibilities:
- Data Engineering & ETL: Develop and optimize data pipelines using Apache Spark, Airflow, Sqoop, and Databricks for seamless data transformation and integration.
- Cloud & Infrastructure Management: Design and implement cloud-native solutions using AWS, GCP, or Azure, ensuring scalability, security, and performance.
- Big Data & Analytics: Work with Hadoop, Snowflake, Data Lake, and Hive to enable advanced analytics and business intelligence capabilities.
- Technical Excellence: Utilize Python, SQL, and cloud data warehousing solutions to drive efficiency in data processing and analytics.
- Agile & DevOps Best Practices: Implement CI/CD pipelines, DevOps methodologies, and Agile workflows for seamless development and deployment.
- Stakeholder Collaboration: Work closely with business and technology teams to translate complex data challenges into business-driven solutions.

Required Qualifications & Skills:
- 5 - 10 years of experience in data engineering, analytics, and cloud-based solutions.
- Strong knowledge of Big Data technologies (Hadoop, Spark, Snowflake, Hive, Databricks, Airflow, AWS).
- Experience with ETL pipelines, data lakes, and large-scale data processing.
- Proficiency in Python, SQL, and cloud data warehousing solutions.
- Hands-on experience in cloud platforms (AWS, Azure, GCP) and infrastructure as code (Terraform, CloudFormation).
- Familiarity with containerization (Docker, Kubernetes) and BI tools (Tableau, Power BI).
- Understanding of Agile, Scrum, and DevOps best practices.
- Strong communication, problem-solving, and collaboration skills.

Why Join Us:
- Work on impactful global data projects for a leading international client.
- Lucrative Retention Bonus: Up to 20% bonus at the end of the first year, based on performance.
- Career Growth & Training: Access to world-class learning in advanced cloud, AI, and analytics technologies.
- Collaborative & High-Performance Culture: Work in a dynamic environment that fosters innovation, leadership, and technical excellence.

About Us: We are a trusted technology partner specializing in enterprise data solutions, cloud transformation, and analytics-driven decision-making. Our expertise in big data, AI, and cloud infrastructure enables us to deliver scalable, high-value solutions to global enterprises.
Posted 3 weeks ago
6.0 - 10.0 years
0 - 0 Lacs
coimbatore, tamil nadu
On-site
As a Big Data Engineer at KGIS, you will be an integral part of the team dedicated to building cutting-edge digital and analytics solutions for global enterprises. With a focus on designing, developing, and optimizing large-scale data processing systems, you will lead the way in creating scalable data pipelines, driving performance tuning, and spearheading cloud-native big data initiatives. Your responsibilities will include designing and developing robust Big Data solutions using Apache Spark, building both batch and real-time data pipelines utilizing technologies like Spark, Spark Streaming, Kafka, and RabbitMQ, implementing ETL processes for data ingestion and transformation, and optimizing Spark jobs for enhanced performance and scalability. You will also work with NoSQL technologies such as HBase, Cassandra, or MongoDB, query large datasets using tools like Hive and Impala, ensure seamless integration of data from various sources, and lead a team of data engineers while following Agile methodologies. To excel in this role, you must possess deep expertise in Apache Spark and distributed computing, strong programming skills in Python, solid experience with Hadoop v2, MapReduce, HDFS, and Sqoop, proficiency in real-time stream processing using Apache Storm or Spark Streaming, and familiarity with messaging systems like Kafka or RabbitMQ. Additionally, you should have SQL mastery, hands-on experience with NoSQL databases, knowledge of cloud-native services in AWS or Azure, a strong understanding of ETL tools and performance tuning, an Agile mindset, and excellent problem-solving skills. While not mandatory, exposure to data lake and lakehouse architectures, familiarity with DevOps tools for CI/CD and data pipeline monitoring, and certifications in cloud or big data technologies are considered advantageous. Joining KGIS will provide you with the opportunity to work on innovative projects with Fortune 500 clients, be part of a fast-paced and meritocratic culture that values ownership, gain access to cutting-edge tools and technologies, and thrive in a collaborative and growth-focused environment. If you are ready to elevate your Big Data career and contribute to our digital transformation journey, apply now and embark on this exciting opportunity at KGIS.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
You will be responsible for designing and developing data solutions using Elasticsearch/OpenSearch, integrating with various data sources and systems. Your role will involve architecting, implementing, and optimizing data solutions, along with applying your expertise in machine learning to develop models, algorithms, and pipelines for data analysis, prediction, and anomaly detection within Elasticsearch/OpenSearch environments. Additionally, you will design and implement data ingestion pipelines to collect, cleanse, and transform data from diverse sources, ensuring data quality and integrity. As part of your responsibilities, you will manage and administer Elasticsearch/OpenSearch clusters, including configuration, performance tuning, index optimization, and monitoring. You will work on optimizing complex queries and search operations in Elasticsearch/OpenSearch to ensure efficient and accurate retrieval of data. Troubleshooting and resolving issues related to Elasticsearch/OpenSearch performance, scalability, and reliability will be a key aspect of your role, requiring close collaboration with DevOps and Infrastructure teams. Collaboration with cross-functional teams, including data scientists, software engineers, and business stakeholders, will be essential to understand requirements and deliver effective data solutions. You will also be responsible for documenting technical designs, processes, and best practices related to Elasticsearch/OpenSearch and machine learning integration, providing guidance and mentorship to junior team members. To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Strong experience in designing, implementing, and managing large-scale Elasticsearch/OpenSearch clusters is required, along with expertise in machine learning techniques and frameworks such as TensorFlow, PyTorch, or scikit-learn. Proficiency in programming languages like Python, Java, or Scala, and experience with data processing frameworks and distributed computing are necessary. A solid understanding of data engineering concepts, cloud platforms, and containerization technologies is highly desirable. The ideal candidate will possess strong analytical and problem-solving skills, with the ability to work effectively in a fast-paced, collaborative environment. Excellent communication skills are crucial, enabling you to translate complex technical concepts into clear explanations for both technical and non-technical stakeholders. A proven track record of successfully delivering data engineering projects on time and within budget is also expected. If you have 5+ years of experience in Data Ingestion and Transformation, Elastic Search/Open Search Administration, Machine Learning Integration, and related areas, we invite you to send your CV to careers@eventussecurity.com. Join us in Ahmedabad and be part of our SOC - Excellence team.,
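To illustrate the ingestion side of the role above, here is a minimal sketch using the official Elasticsearch Python client's bulk helper; the cluster endpoint, index name, and document fields are hypothetical, and the OpenSearch Python client exposes a very similar API:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster endpoint

# Hypothetical cleansed records ready for indexing
records = [
    {"host": "app-01", "severity": "high", "cve": "CVE-2024-0001"},
    {"host": "db-02", "severity": "low", "cve": "CVE-2024-0002"},
]

# One bulk action per document, targeting a single index
actions = (
    {"_index": "security-events", "_source": rec}
    for rec in records
)

ok, errors = helpers.bulk(es, actions, raise_on_error=False)
print(f"indexed {ok} docs, {len(errors)} errors")
```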
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group. As a global business line, Capgemini Invent combines expertise in strategy, technology, data science, and creative design to assist CxOs in envisioning and constructing what's next for their businesses.

In this role, you will be responsible for developing and maintaining scalable data pipelines using AWS services. Your tasks will include optimizing data storage and retrieval processes, ensuring data security and compliance with industry standards, and handling large volumes of data while maintaining accuracy, security, and accessibility. Additionally, you will be involved in developing data set processes for data modeling, mining, and production, implementing data quality and validation processes, and collaborating closely with data scientists, analysts, and IT departments to understand data requirements. You will work with data architects, modelers, and IT team members on project goals, monitor and troubleshoot data pipeline issues, conduct performance tuning and optimization of data solutions, and implement disaster recovery procedures. Your role will also involve ensuring the seamless integration of HR data from various sources into the cloud environment, researching opportunities for data acquisition and new uses for existing data, and staying up to date with the latest cloud technologies and best practices. You will be expected to recommend ways to improve data reliability, efficiency, and quality.

To be successful in this position, you should have 10+ years of experience in cloud data engineering and proficiency in cloud platforms such as AWS, Azure, or Google Cloud. Experience with data pipeline tools like Apache Spark and AWS Glue, strong programming skills in languages such as Python, SQL, Java, or Scala, and familiarity with Snowflake or Informatica are advantageous. Knowledge of data privacy laws, security best practices, database technologies, and a demonstrated learner attitude are also essential. Strong communication, teamwork skills, and the ability to work in an Agile framework while managing multiple projects simultaneously will be key to excelling in this role.

At Capgemini, we value flexible work arrangements to support a healthy work-life balance. We offer various career growth programs and diverse professions to help you explore a world of opportunities. Additionally, you will have the opportunity to equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner, dedicated to helping organizations accelerate their transition to a digital and sustainable world. With a diverse team of over 340,000 members in more than 50 countries, Capgemini leverages its strong heritage and expertise in AI, cloud, and data to address clients' business needs comprehensively. We are committed to unlocking technology's value and creating tangible impact for enterprises and society.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Data Scientist at GlobalLogic, you will be responsible for working as a Full-stack AI Engineer. You must have proficiency in programming languages like Python, Java/Scala, and experience with data processing libraries such as Pandas, NumPy, and Scikit-learn. Additionally, you should be proficient in distributed computing platforms like Apache Spark (PySpark, Scala), and Torch. It is essential to have expertise in API development using Fast API, Spring Boot, and a good understanding of O&M - logging, monitoring, fault management, security, etc. Furthermore, it would be beneficial to have hands-on experience with deployment and orchestration tools like Docker, Kubernetes, Helm. Experience with cloud platforms such as AWS (Sagemaker/ Bedrock), GCP, or Azure is also advantageous. Strong programming skills in TensorFlow, PyTorch, or similar ML frameworks for training and deployment are considered good-to-have qualities for this role. At GlobalLogic, we prioritize a culture of caring, where you will experience an inclusive environment of acceptance and belonging. Continuous learning and development opportunities are provided to help you grow personally and professionally. You will have the chance to work on interesting and meaningful projects that make an impact for clients worldwide. We believe in the importance of work-life balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve the perfect balance. As a high-trust organization, integrity is key, and you can trust GlobalLogic to provide a safe, reliable, and ethical work environment. By joining us, you become part of a team that values truthfulness, candor, and integrity in everything we do. GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with some of the world's largest and most innovative companies. Since 2000, we have been at the forefront of the digital revolution, creating innovative digital products and experiences. Join us in transforming businesses and redefining industries through intelligent products, platforms, and services.,
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You will be joining one of the Big Four companies in India at either Bangalore or Mumbai. As a Spark/Scala Developer specializing in Big Data, you will play a key role in designing and implementing scalable data solutions while ensuring optimal performance. Your responsibilities will include translating business requirements into technical deliverables and contributing to the overall success of the team. To excel in this role, you should have 3 to 5 years of experience as a Big Data Engineer or in a similar position. Additionally, a minimum of 2 years of experience in Scala programming and SQL is required. You will be expected to design, modify, and implement solutions for handling data in Hadoop Data Lake for both batch and streaming workloads using Scala & Apache Spark. Alongside this, debugging, optimization, and performance tuning of Spark jobs will be a part of your daily tasks. Your ability to translate functional requirements and user-stories into technical solutions will be crucial. Furthermore, your expertise in developing and debugging complex SQL queries to extract valuable business insights will be highly beneficial. While not mandatory, any prior development experience with cloud services such as AWS, Azure, or GCP will be considered advantageous for this role.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
The Content and Data Analytics team at Elsevier is an integral part of Global Operations, delivering data analysis services using Databricks to product owners and data scientists. As a Senior Data Analyst, you will work independently to provide advanced insights and recommendations, leading analytics efforts with high complexity. Your responsibilities will include supporting data scientists within the Research Data Platform, performing various analytical activities such as diving into large datasets, data preparation, and evaluating data science algorithms. A keen eye for detail, strong analytical skills, and expertise in data analysis systems are essential, along with curiosity and dedication to high-quality work in the scientific research domain. Requirements for this role include a minimum of 5 years of work experience, coding skills in Python and SQL, familiarity with string manipulation functions like regular expressions, and experience with data analysis tools such as Pandas or Apache Spark/Databricks. Knowledge of basic statistics, visualization tools like Tableau/Power BI, and Agile tools like JIRA are advantageous. You will be expected to build and maintain relationships with stakeholders, present achievements and project updates effectively, and collaborate well within a team. Taking initiative, driving for results, and demonstrating strong stakeholder management skills are key competencies for success in this role. In addition to work-life balance initiatives, Elsevier offers comprehensive health insurance, flexible working arrangements, employee assistance programs, and various leave options. Your well-being and happiness are prioritized, with benefits including group life insurance, modern family support, and subsidized meals. Join us at Elsevier, a global leader in information and analytics supporting science, research, and healthcare. Your work will contribute to addressing global challenges and promoting a sustainable future through innovative technologies and partnerships. We are committed to fair and accessible hiring practices, ensuring a safe and inclusive workplace for all candidates.,
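Given the emphasis above on Python, regular expressions, and Pandas, a small hedged example of the kind of string-cleanup task involved; the affiliation strings and the pattern are invented for illustration:

```python
import pandas as pd

# Hypothetical raw affiliation strings attached to research articles
df = pd.DataFrame({
    "affiliation": [
        "Dept. of Physics, Example University, Amsterdam, NL",
        "Example Institute of Technology, Delft, NL",
        "School of Medicine, Sample University, Boston, US",
    ]
})

# Use a regular expression to pull out the trailing "city, country" pair
df[["city", "country"]] = df["affiliation"].str.extract(r",\s*([^,]+),\s*([A-Z]{2})$")

print(df)
```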
Posted 3 weeks ago
10.0 - 18.0 years
0 Lacs
pune, maharashtra
On-site
We are looking for a seasoned Senior Data Architect with extensive knowledge in Databricks and Microsoft Fabric to join our team. In this role, you will be responsible for leading the design and implementation of scalable data solutions for BFSI and HLS clients. As a Senior Data Architect specializing in Databricks and Microsoft Fabric, you will play a crucial role in architecting and implementing secure, high-performance data solutions on the Databricks and Azure Fabric platforms. Your responsibilities will include leading discovery workshops, designing end-to-end data pipelines, optimizing workloads for performance and cost efficiency, and ensuring compliance with data governance, security, and privacy policies. You will collaborate with client stakeholders and internal teams to deliver technical engagements and provide guidance on best practices for Databricks and Microsoft Azure. Additionally, you will stay updated on the latest industry developments and recommend new data architectures, technologies, and standards to enhance our solutions. As a subject matter expert in Databricks and Azure Fabric, you will be responsible for delivering workshops, webinars, and technical presentations, as well as developing white papers and reusable artifacts to showcase our company's value proposition. You will also work closely with Databricks partnership teams to contribute to co-marketing and joint go-to-market strategies. In terms of business development support, you will collaborate with sales and pre-sales teams to provide technical guidance during RFP responses and identify upsell and cross-sell opportunities within existing accounts. To be successful in this role, you should have a minimum of 10+ years of experience in data architecture, engineering, or analytics roles, with specific expertise in Databricks and Azure Fabric. You should also possess strong communication and presentation skills, as well as the ability to collaborate effectively with diverse teams. Additionally, certifications in cloud platforms such as AWS and Microsoft Azure will be advantageous. In return, we offer a competitive salary and benefits package, a culture focused on talent development, and opportunities to work with cutting-edge technologies. At Persistent, we are committed to fostering diversity and inclusion in the workplace and invite applications from all qualified individuals. We provide a supportive and inclusive environment where all employees can thrive and unleash their full potential. Join us at Persistent and accelerate your growth professionally and personally while making a positive impact on the world with the latest technologies.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Are you a seasoned engineer with a passion for data science and AI? We are seeking a talented individual to lead our AM Analytics platform, a cutting-edge data science and AI platform that empowers our business users to develop innovative data models using the latest technologies.

Your role will involve collaborating with business stakeholders across UBS Asset Management to gather and refine requirements, ensuring their successful delivery through technology solutions. You will act as a trusted advisor by identifying opportunities and guiding stakeholders on what's possible through technology. Additionally, you will lead the development of custom capabilities and workflows within the AM Analytics solutions, overseeing the full lifecycle from functional design through development and testing. It will be your responsibility to ensure the AM Analytics platform is secure, scalable, and resilient, supporting business continuity and long-term growth. You will also serve as a techno-functional expert with a strong grasp of both business needs and technical architecture, effectively communicating with diverse audiences, managing stakeholder relationships, and translating complex technical insights into actionable business value. Furthermore, you will have the opportunity to join our Certified Engineer Development Program and participate in various technology guilds focused on data, artificial intelligence, and other domains aligned with your interests. These communities offer a platform to contribute across UBS, foster innovation, and expand your expertise.

You will be joining Asset Management Technology, a truly global and diverse organization of around 1,200 professionals, and take on a leadership role within the AM Technology Data Services agile crew. Our mission at Asset Management Technology is to drive sustainable investment outcomes and empower our teams through innovation. We are a client-focused, forward-thinking organization that values creativity and collaboration. Our culture fosters individual growth and team excellence, supported by data-driven, scalable platforms that align with business goals. We take pride in our work, celebrate our achievements, and thrive in a dynamic, high-performing environment.

To excel in this role, you should have proven, hands-on experience in data engineering with strong proficiency in Python, SQL, Azure Kubernetes, and Azure Cloud. Exposure to AI, DevOps practices, and web service integration is highly valued. A solid background in relational databases such as PostgreSQL, Oracle SQL, Microsoft SQL Server, and MySQL is required; experience with non-relational databases is a plus. You should possess a strong knowledge of data engineering practices, including data profiling and ETL/ELT pipeline development, as well as experience with big data platforms such as Databricks, Cassandra, Apache Spark, and Hadoop. Skills in working with distributed systems, clustering, and replication technologies are essential, along with practical experience with machine learning frameworks including TensorFlow, PyTorch, and scikit-learn. Familiarity with natural language processing (NLP) and AI model development and deployment workflows (e.g., MLOps) is advantageous. You should be comfortable working in a collaborative, multi-site environment using Agile software development methodologies.
A Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience is required, along with over 5 years of diverse experience in designing and managing innovative analytics and AI solutions; experience in the financial or fintech industry is a strong advantage. UBS is the world's largest and the only truly global wealth manager, operating through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. With a presence in all major financial centers in more than 50 countries, UBS stands out due to its global reach and the breadth of expertise, setting it apart from competitors. UBS is an Equal Opportunity Employer that respects and seeks to empower each individual, supporting the diverse cultures, perspectives, skills, and experiences within its workforce.,
Posted 3 weeks ago