
16161 Spark Jobs - Page 14

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences.

To be successful as a Big Data Engineer, you should have experience with:
- Full-stack software development for large-scale, mission-critical applications.
- Distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, and Airflow.
- Scala, Java, Python, J2EE technologies, microservices, Spring, Hibernate, and REST APIs.
- N-tier web application development and frameworks such as Spring Boot, Spring MVC, JPA, and Hibernate.
- Version control systems, preferably Git; GitHub Copilot experience is a plus.
- API development using SOAP or REST, JSON, and XML.
- Developing back-end applications with multi-process and multi-threaded architectures.
- Building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- DevOps practices such as CI/CD, test automation, and build automation, using tools like Jenkins, Maven, Chef, Git, and Docker.
- Data processing in cloud environments such as Azure or AWS.
- Data product development (essential).
- Agile development methodologies such as Scrum.
- A result-oriented approach with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise.
- Thorough understanding of the underlying principles and concepts within the area of expertise.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.

Posted 3 days ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Experience: 1-3 years
Location: Bangalore/Pune/Hyderabad
Work mode: Hybrid

Founded in 2017, CoffeeBeans specializes in offering high-end consulting services in technology, product, and processes. We help our clients attain significant improvements in delivery quality through impactful product launches, process simplification, and the building of competencies that drive business outcomes across industries. The company uses new-age technologies to help its clients build superior products and realize better customer value. We also offer data-driven solutions and AI-based products for businesses operating in a wide range of product categories and service domains.

As a Data Engineer, you will play a crucial role in designing and optimizing data solutions for our clients. The ideal candidate will have a strong foundation in Python, experience with Databricks Warehouse SQL or a similar Spark-based SQL platform, and a deep understanding of performance optimization techniques in the data engineering landscape. Knowledge of serverless approaches, Spark Streaming, Structured Streaming, Delta Live Tables, and related technologies is essential.

What are we looking for?
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 1-3 years of experience as a Data Engineer.
- Proven track record of designing and optimizing data solutions.
- Strong problem-solving and analytical skills.

Must haves
- Python: proficiency in Python for data engineering tasks and scripting.
- Performance optimization: deep understanding and practical experience in optimizing data engineering performance.
- Serverless approaches: familiarity with serverless approaches in data engineering solutions.

Good to have
- Databricks Warehouse SQL or an equivalent Spark SQL platform: hands-on experience.
- Spark Streaming: experience with Spark Streaming for real-time data processing.
- Structured Streaming: familiarity with Structured Streaming in Apache Spark.
- Delta Live Tables: knowledge and practical experience with Delta Live Tables or similar technologies.

What will you be doing?
- Design, develop, and maintain scalable and efficient data solutions.
- Collaborate with clients to understand data requirements and provide tailored solutions.
- Implement performance optimization techniques in the data engineering landscape.
- Work with serverless approaches to enhance scalability and flexibility.
- Utilize Spark Streaming and Structured Streaming for real-time data processing (see the sketch below).
- Implement and manage Delta Live Tables for efficient change data capture.
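For readers unfamiliar with the streaming stack this listing names, here is a minimal sketch of a Structured Streaming pipeline reading from Kafka and writing to a Delta table. The broker address, topic, and paths are invented for illustration, and the Delta and Kafka connectors are assumed to be on the cluster (as they are on Databricks); Delta Live Tables wraps this same pattern declaratively.

```python
# Minimal sketch: Kafka -> Delta with Spark Structured Streaming.
# Broker, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")              # hypothetical topic
       .load())

parsed = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .outputMode("append")
         .start("/tmp/delta/events"))              # hypothetical table path
query.awaitTermination()
```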

Posted 3 days ago

Apply

14.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Locations: Gurgaon | Boston

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
BCG is committed to hiring and developing top talent. We are transforming our operating model to leverage SAP S/4HANA Public Cloud as our new ERP, and are therefore seeking an experienced senior leader as Technical Team Lead for our Finance product team. The Technical Team Lead will be a senior technology leader with extensive experience thinking strategically about technology development and team growth. You will lead the technical strategy for our SAP S/4HANA Public Cloud platform, ensuring the long-term health and stability of the platform while guiding teams in how best to leverage the platform to meet priority business needs. You will carry out regular reviews of performance and data-integrity metrics for the platform. From a people-management perspective, you will oversee technical personnel, including management of vendors. You will develop a close working relationship with the Product Team Lead to ensure alignment of technical resources and business priorities. You will inform key stakeholders about the state and direction of the platform strategy adopted by Chapter Leads, Product Owners, and other stakeholders. You will mentor the people around you and drive collaboration and knowledge sharing to benefit your platform team and the broader organization.

The ideal candidate will be responsible for maximizing leverage of the SAP S/4HANA platform to meet the current and future needs of not just our Finance function, but a rapidly growing firm. Working in an Agile portfolio environment, the candidate will coordinate platform activities and best practices across squads, such as license management, module management, upgrade management, and transport management. The candidate will be part of the portfolio's leadership team, partnering closely with the Portfolio Lead and Technical Area Lead to define and deliver on the portfolio's broader strategy. As a key technical resource, you will work closely with other groups in building new functionality, assisting with architectural designs, helping to define sprints, and partnering to turn requirements into reality. The candidate will serve as line manager for our ERP chapter leads, who are responsible for the ongoing development, configuration, and maintenance of the SAP S/4HANA platform.

The candidate will oversee operational support as well as perform all other related tasks, such as:
- Establishing technological roadmaps and guardrails, and regularly reviewing them to inform how SAP platforms and processes work and are leveraged across a group of product teams
- Managing technological change by staying on top of relevant technical developments and innovations in your domain (both internally and externally), and proactively analyzing their impact to drive technological health
- Sharing relevant insights and developments within your area of expertise with all related teams
- Ensuring adherence to firm-wide technology standards, and adapting them to suit Product Teams' needs
- Working with stakeholders across architecture, security, risk, and other COEs to ensure the platform's adherence to relevant technology standards and guidelines
- Taking a long-term lens on adherence to standards and tools, and utilizing them to help the Product Teams succeed
- Supporting and mentoring an organization of deep technical experts where knowledge is shared freely
- Extensively coordinating with vendors and various other stakeholders, both internally and externally, and monitoring dependencies to create a cohesive tech strategy and set clear direction
- Maintaining your own technical knowledge through learning and continuous improvement
- Managing, providing feedback for, and developing Chapter Leads to further their skillsets and career development
- Taking a long-term point of view on tech competency resourcing, identifying knowledge or skill gaps that exist within Product Teams
- Working with relevant SMEs and the Product Team Lead to ensure the Product Team has the relevant capacity, resources, and talent to deliver new functionality in light of large change initiatives and/or changes in demand
- Modelling behaviours to support the organization's technical transformation to a new way of working empowered by technological enablers (e.g. GenAI developer tools)
- Actively creating and maintaining the culture of the Product Teams based on the organization and Agile leadership behaviours

What You'll Bring
- 14+ years' relevant experience, including proven experience as a technology leader, ideally having led teams spanning multiple products or platforms
- Passion for technology and work that inspires teams
- Deep understanding of the SAP S/4HANA platform and the common challenges in deploying and operating it in a global firm
- Experience in managing vendor/partner relationships and external consultants
- Experience in managing SAP upgrades and navigating the complexity of transport management
- Eagerness to learn new technologies and share knowledge
- Strength as a mentor, relationship builder, and confident communicator able to facilitate the technical skills of the people you lead
- Data-driven, pragmatic, and able to demonstrate technical dexterity as a problem solver
- Affinity for creating structure around product delivery while thinking strategically about tough trade-offs
- Experience in Agile methodology and comfort working within a rapidly changing environment

Who You'll Work With
- Your Product Team, by setting their technological strategy, architecture, tooling, and systems, ensuring that you provide the best IT solutions
- The Product Team Lead, your counterpart, who will be in charge of transforming business needs into products from a business point of view
- The Technical Area Lead, your line manager, who is responsible for the broader technical direction of the overall product portfolio
- Chapter Leads and technical direct reports whom you will manage and guide in their career development
- Agile Coaches, who will support you with BCG Agile principles, mindset, and ways of working

Additional Info
You're good at:
- Dealing with current technology and wanting to know what's coming next in terms of disruptive technologies
- Imagining the best ways to transform business needs into configurations or lines of code, understanding which digital solutions are best
- Leading engineering teams and troubleshooting technical issues that involve software development, platform, configuration management, engineering tasks, and product releases
- Supporting a heterogeneous environment with a mix of SaaS products, custom-developed applications, and complex data integrations
- Taking a long-term view on managing tech competency resourcing and vendors, including considering large change initiatives and/or changes in demand
- Overseeing technical architecture, tooling, and systems in coordination with Enterprise Architecture, and ensuring compliance with technology standards, tools, and guardrails
- Focusing on delivering agreed-upon business results and customer value
- Operating with a transparency mindset, communicating clearly and openly to all levels of stakeholders
- Keeping abreast of your domain area and relevant industry trends
- Bringing a customer-centric approach to your work
- Committing to cross-functional collaboration to achieve the best results for the organization
- Experimenting with emerging technologies and understanding how they will impact what comes next
- Being willing to trust and empower your teams to work autonomously to deliver great value to customers

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify employer.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients, participating in activities ranging from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance across the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
- B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or another Engineering discipline
- Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
- Highly proficient in SQL and data modeling (conceptual and logical) concepts
- Highly proficient with Python and Spark (3+ years)
- Knowledge of distributed computing and cloud databases such as Redshift, BigQuery, etc.
- 2+ years of hands-on experience with one of the top cloud platforms: AWS/GCP/Azure
- Experience with modern data stack tools such as Airflow, Terraform, dbt, Glue, Dataproc, etc.
- Exposure to Hadoop and shell scripting is a plus
- Minimums: 2 years overall; Databricks 1 year desirable; Python & Spark 1+ years; SQL; any cloud experience 1+ year

Responsibilities
- Design, implementation, and improvement of processes and automation of data infrastructure
- Tuning of data pipelines for reliability and performance
- Building tools and scripts to develop, monitor, and troubleshoot ETLs
- Perform scalability, latency, and availability tests on a regular basis
- Perform code reviews and QA data imported by various processes (see the quality-check sketch below)
- Investigate, analyze, correct, and document reported data defects
- Create and maintain technical specification documentation

(ref:hirist.tech)
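As a concrete illustration of the "QA data imported by various processes" responsibility, here is a minimal sketch of a data-quality gate in PySpark. The table name and the 5% null threshold are assumptions invented for the example, not part of the listing.

```python
# Sketch of a basic data-quality gate for an imported table.
# Table name and thresholds are illustrative, not from the listing.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("bronze.orders")  # hypothetical imported table

row_count = df.count()
# Count nulls per column in a single pass over the data.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).first().asDict()

assert row_count > 0, "import produced an empty table"
bad = {c: n for c, n in null_counts.items() if n and n > 0.05 * row_count}
if bad:
    raise ValueError(f"columns over 5% nulls: {bad}")
```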

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

Greater Chennai Area

On-site

Job Description
- Lead and mentor a team of data scientists/analysts.
- Provide analytical insights by analyzing various types of data, including mining our customer data, reviewing relevant cases/samples, and incorporating feedback from others.
- Work closely with business partners and stakeholders to determine how to design analysis, testing, and measurement approaches that will significantly improve our ability to understand and address emerging business issues.
- Produce intelligent, scalable, and automated solutions by leveraging data science skills.
- Work closely with technology teams on the development of new capabilities, defining requirements and priorities based on data analysis and business knowledge.
- Develop expertise in specific areas by leading analytical projects independently, while setting goals, providing benefit estimations, defining workflows, and coordinating timelines in advance.
- Provide updates to leadership, peers, and other stakeholders that simplify and clarify complex concepts and the results of analyses, with emphasis on actionable outcomes and business impact.

Requirements
- 2 to 5 years in advanced analytics, statistical modelling, and machine learning.
- Best-practice knowledge in credit risk: a strong understanding of the full lifecycle from origination to debt collection.
- Well versed in ML algorithms, big data concepts, and cloud implementations.
- High proficiency in Python and SQL/NoSQL.
- Collections and digital channels experience is a plus.
- Strong organizational skills and excellent follow-through.
- Outstanding written, verbal, and interpersonal communication skills.
- High emotional intelligence, a can-do mentality, and a creative approach to problem solving.
- Takes personal ownership; a self-starter with the ability to drive projects with minimal guidance and a focus on high-impact work.
- Learns continuously; seeks out knowledge, ideas, and feedback, and looks for opportunities to build skills, knowledge, and expertise.
- Experience with big data and cloud computing, viz. Spark and Hadoop (MapReduce, Pig, Hive).
- Experience in risk and credit score domains preferred.

(ref:hirist.tech)

Posted 3 days ago

Apply


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
We are seeking a skilled and passionate Data Engineer to join our team and drive the development of scalable data pipelines for Generative AI (GenAI) and Large Language Model (LLM)-powered applications. This role demands hands-on expertise in Spark, GCP, and data integration with modern AI APIs.

What You'll Do
- Design and develop high-throughput, scalable data pipelines for GenAI and LLM-based solutions.
- Build robust ETL/ELT processes using Spark (PySpark/Scala) on Google Cloud Platform (GCP).
- Integrate enterprise and unstructured data with LLM APIs such as OpenAI, Gemini, and Hugging Face (see the sketch below).
- Process and enrich large volumes of unstructured data, including text and document embeddings.
- Manage real-time and batch workflows using Airflow, Dataflow, and BigQuery.
- Implement and maintain best practices for data quality, observability, lineage, and API-first designs.

What Sets You Apart
- 3+ years of experience building scalable Spark-based pipelines (PySpark or Scala).
- Strong hands-on experience with GCP services: BigQuery, Dataproc, Pub/Sub, Cloud Functions.
- Familiarity with LLM APIs, vector databases (e.g., Pinecone, FAISS), and GenAI use cases.
- Expertise in text processing, unstructured data handling, and performance optimization.
- An agile mindset and the ability to thrive in a fast-paced startup or dynamic environment.

Nice To Have
- Experience working with embeddings and semantic search.
- Exposure to MLOps or data observability tools.
- Background in deploying production-grade AI/ML workflows.

(ref:hirist.tech)
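To make the "enrich unstructured data with embeddings via an LLM API" work concrete, here is a minimal hedged sketch using the OpenAI Python SDK; the model choice and sample texts are assumptions, and in a Spark pipeline a call like this would typically run inside a batched UDF rather than on the driver.

```python
# Sketch: enrich text records with embeddings via an LLM API.
# Model name and inputs are assumptions; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def embed_batch(texts: list[str]) -> list[list[float]]:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # assumed model choice
        input=texts,
    )
    return [item.embedding for item in resp.data]

docs = ["invoice 123: net 30 terms", "support ticket: login failure"]
vectors = embed_batch(docs)
print(len(vectors), len(vectors[0]))  # 2 vectors of fixed dimension
```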

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and self-driven Site Reliability Engineer to join our dynamic team. This role is ideal for someone with a strong foundation in Kubernetes, DevOps, and observability who can also support machine learning infrastructure, GPU optimization, and big data ecosystems. You will play a pivotal role in ensuring the reliability, scalability, and performance of our production systems, while also enabling innovation across ML and data teams.

Key Responsibilities

Automation & Reliability:
- Design, build, and maintain Kubernetes clusters across hybrid or cloud environments (e.g., EKS, GKE, AKS).
- Implement and optimize CI/CD pipelines using tools like Jenkins, ArgoCD, and GitHub Actions.
- Develop and maintain Infrastructure as Code (IaC) using Ansible, Terraform, or similar.

Monitoring & Observability:
- Deploy and maintain monitoring, logging, and tracing tools (e.g., Thanos, Prometheus, Grafana, Loki, Jaeger).
- Establish proactive alerting and observability practices to identify and address issues before they impact users (see the sketch below).

ML Ops & GPU Optimization:
- Support and scale ML workflows using tools like Kubeflow, MLflow, and TensorFlow Serving.
- Work with data scientists to ensure efficient use of GPU resources, optimizing training and inference.

Troubleshooting & Incident Management:
- Lead root cause analysis for infrastructure and application-level incidents.
- Participate in the on-call rotation and improve incident response processes.

Scripting & Automation:
- Automate operational tasks and service deployment using Python, Shell, Groovy, or Ansible.
- Write reusable scripts and tools to improve team productivity and reduce manual effort.

Learning & Collaboration:
- Stay up to date with emerging technologies in SRE, MLOps, and observability.
- Collaborate with cross-functional teams, including engineering, data science, and security, to ensure system integrity and reliability.

Requirements:
- 3+ years of experience as an SRE, DevOps Engineer, or in an equivalent role.
- Strong experience with the Kubernetes ecosystem and container orchestration.
- Proficiency in DevOps tooling, including Jenkins, ArgoCD, and GitOps workflows.
- Deep understanding of observability tools, with hands-on experience using the Thanos and Prometheus stack.
- Experience with ML platforms (MLflow, Kubeflow) and supporting GPU workloads.
- Strong scripting skills in Python, Shell, Ansible, or similar.

Nice to have:
- CKS (Certified Kubernetes Security Specialist) certification.
- Exposure to big data platforms (e.g., Spark, Kafka, Hadoop).
- Experience with cloud-native environments (AWS, GCP, or Azure).
- Background in infrastructure security and compliance.

(ref:hirist.tech)
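As an illustration of the proactive-alerting work described above, here is a minimal sketch that polls Prometheus's standard HTTP query API from Python. The server URL, PromQL expression, and threshold are invented for the example.

```python
# Sketch: poll Prometheus's HTTP API for an alerting condition.
# Server URL, PromQL expression, and threshold are assumptions.
import requests

PROM_URL = "http://prometheus:9090/api/v1/query"
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

# Prometheus instant-query results are [timestamp, "value"] pairs.
error_rate = float(result[0]["value"][1]) if result else 0.0
if error_rate > 1.0:  # threshold chosen for the example
    print(f"ALERT: 5xx rate {error_rate:.2f}/s")
```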

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First-class degree in Engineering/Technology (4-year graduate course)
- 4-6 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: hands-on experience building data pipelines (a batch sketch follows this listing); proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big data: experience with big data platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data warehousing & database management: understanding of data warehousing concepts and of relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data modeling & design: good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: proficiency in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: exposure to concepts and enablers such as CI/CD platforms, version control, and automated quality control management

Technical Skills (Valuable)
- Ab Initio: experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of the underlying architectures and trade-offs
- Data quality & controls: exposure to data validation, cleansing, enrichment, and data controls
- Containerization: fair understanding of containerization platforms like Docker and Kubernetes
- File formats: exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: basics of a job scheduler like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Most Relevant Skills
Please see the requirements listed above.

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
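For candidates newer to the "hands-on experience of building data pipelines" requirement, a minimal batch sketch in PySpark of a read-transform-write job; the paths and column names are hypothetical placeholders, not from the listing.

```python
# Sketch: a small batch pipeline of the kind described above.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-batch").getOrCreate()

trades = spark.read.parquet("s3a://raw/trades/")     # assumed source
enriched = (trades
            .withColumn("trade_date", F.to_date("executed_at"))
            .filter(F.col("quantity") > 0))          # basic data control

(enriched.write
 .mode("overwrite")
 .partitionBy("trade_date")                          # partitioned layout
 .parquet("s3a://curated/trades/"))                  # assumed target
```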

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a member of the Google Cloud Consulting Professional Services team, you will have the opportunity to contribute to the success of businesses by guiding them through their cloud journey and leveraging Google's global network, data centers, and software infrastructure. Your role will involve assisting customers in transforming their businesses by utilizing technology to connect with customers, employees, and partners.

Your responsibilities will include interacting with stakeholders to understand customer requirements and providing recommendations for solution architectures. You will collaborate with technical leads and partners to lead migration and modernization projects to Google Cloud Platform (GCP). Additionally, you will design, build, and operationalize data storage and processing infrastructure using cloud-native products, ensuring data quality and governance procedures are in place to maintain accuracy and reliability.

In this role, you will work on data migrations and modernization projects, and design data processing systems optimized for scaling. You will troubleshoot platform/product tests, understand data governance and security controls, and travel to customer sites to deploy solutions and conduct workshops to educate and empower customers. Furthermore, you will be responsible for translating project requirements into goals and objectives, and creating work breakdown structures to manage internal and external stakeholders effectively. You will collaborate with Product Management and Product Engineering teams to drive excellence in products and contribute to the digital transformation of organizations across various industries.

By joining this team, you will play a crucial role in shaping the future of businesses of all sizes and assisting them in leveraging Google Cloud to accelerate their digital transformation journey.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Transformation Engineering professional at Talworx, you will be expected to meet the following requirements: a Bachelor's degree in Computer Science, Information Systems, or a related field is preferred, along with at least 5 years of experience in application development, deployment, and support. Your expertise should encompass a wide range of technologies, including Java, JEE, JSP, Spring, Spring Boot (microservices), Spring JPA, REST, JSON, JUnit, React, Python, JavaScript, HTML, and XML. Additionally, you should have a minimum of 3 years of experience in a Platform/Application Engineering role supporting on-premises and cloud-based deployments, with a preference for Azure.

While not mandatory, the following skills would be beneficial for the role:
- At least 3 years of experience in Platform/Application Administration.
- Proficiency in software deployments on Linux and Windows systems.
- Familiarity with Spark, Docker, containers, Kubernetes, microservices, data analytics, visualization tools, and Git.
- Hands-on experience in building and supporting modern AI technologies such as Azure OpenAI and LLM infrastructure/applications.
- Experience in deploying and maintaining applications and infrastructure through configuration management software like Ansible and Terraform, following Infrastructure as Code (IaC) best practices.
- Strong scripting skills in languages like Bash and Python.
- Proficiency in using GitHub to manage application and infrastructure deployment lifecycles within a structured CI/CD environment.
- Familiarity with working in a structured ITSM change management environment.
- Knowledge of configuring monitoring solutions and creating dashboards using tools like Splunk, Wily, Prometheus, Grafana, Dynatrace, and Azure Monitor.

If you are passionate about driving transformation through engineering and possess the required qualifications and skills, we encourage you to apply and be a part of our dynamic team at Talworx.

Posted 3 days ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Hyderabad, Bengaluru

Work from Office

To apply, it is mandatory to submit details via the Google Form: https://forms.gle/cCa1WfCcidgiSTgh8

Position: Senior Data Engineer. Total 8+ years required; relevant 6+ years in Databricks, AWS, Apache Spark & Informatica (required skills).

As a Senior Data Engineer on our team, you'll build and nurture positive working relationships with teams and clients with the intention of exceeding client expectations. We are seeking an experienced data engineer to design, implement, and maintain robust data pipelines and analytics solutions using Databricks and AWS services. The ideal candidate will have a strong background in data services, big data technologies, and programming languages.

Role & responsibilities
- Technical leadership: guide and mentor teams in designing and implementing Databricks solutions.
- Architecture & design: develop scalable data pipelines and architectures using the Databricks Lakehouse.
- Data engineering: lead the ingestion and transformation of batch and streaming data.
- Performance optimization: ensure efficient resource utilization and troubleshoot performance bottlenecks.
- Security & compliance: implement best practices for data governance, access control, and compliance.
- Collaboration: work closely with data engineers, analysts, and business stakeholders.
- Cloud integration: manage Databricks environments on Azure, AWS, or GCP.
- Monitoring & automation: set up monitoring tools and automate workflows for efficiency (see the Glue orchestration sketch below).

Qualifications
- 6+ years of experience in Databricks and AWS, and 4+ years in Apache Spark and Informatica.
- Excellent problem-solving and leadership skills.

Good-to-have skills
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile (good to have)
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of that experience focused on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical skills (good to have)
- AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data warehousing and analytics
- ETL/ELT processes
- Data lake architectures
- Version control: Git
- Agile methodologies
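To show what the Glue-based automation mentioned above can look like from Python, here is a minimal sketch that triggers a Glue job with boto3 and polls it to completion; the job name, region, and argument are hypothetical placeholders.

```python
# Sketch: trigger and poll an AWS Glue job from Python with boto3.
# Job name, region, and argument are assumptions for illustration.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run = glue.start_job_run(
    JobName="nightly-orders-etl",                  # hypothetical job
    Arguments={"--run_date": "2024-01-01"},
)
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    state = glue.get_job_run(JobName="nightly-orders-etl",
                             RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
print("final state:", state)
```

In practice this kind of orchestration is often delegated to Step Functions or Airflow rather than a hand-rolled polling loop; the sketch only illustrates the API surface.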

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Software Engineer specializing in AI/ML/LLM/Data Science at Entra Solutions, a FinTech company in the mortgage industry, you will play a crucial role in designing, developing, and deploying AI-driven solutions using cutting-edge technologies such as Machine Learning, NLP, and Large Language Models (LLMs). Your primary focus will be on building and optimizing retrieval-augmented generation (RAG) systems, LLM fine-tuning, and vector search technologies using Python. You will be responsible for developing scalable AI pipelines that ensure high performance and seamless integration with both cloud and on-premises environments. Additionally, this role will involve implementing MLOps best practices, optimizing AI model performance, and deploying intelligent applications.

In this role, you will:
- Develop, fine-tune, and deploy AI/ML models and LLM-based applications for real-world use cases.
- Build and optimize retrieval-augmented generation (RAG) systems using vector databases such as ChromaDB, Pinecone, and FAISS (see the retrieval sketch below).
- Work on LLM fine-tuning, embeddings, and prompt engineering to enhance model performance.
- Create end-to-end AI solutions with APIs using frameworks like FastAPI, Flask, or similar technologies.
- Establish and maintain scalable data pipelines for training and inferencing AI models.
- Deploy and manage models using MLOps best practices on cloud platforms like AWS or Azure.
- Optimize AI model performance for low-latency inference and scalability.
- Collaborate with cross-functional teams, including Product, Engineering, and Data Science, to integrate AI capabilities into applications.

Qualifications

Must have:
- Proficiency in Python.
- Strong hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, and OpenAI APIs.

Good to have:
- Experience with LLM fine-tuning, embeddings, and transformers.
- Knowledge of NLP and vector search technologies (ChromaDB, Pinecone, FAISS, Milvus).
- Experience building scalable AI models and data pipelines with Spark, Kafka, or Dask.
- Familiarity with MLOps tools like Docker, Kubernetes, and CI/CD for AI models.
- Hands-on experience in cloud-based AI deployment using platforms like AWS Lambda, SageMaker, GCP Vertex AI, or Azure ML.
- Knowledge of prompt engineering, GPT models, or knowledge graphs.

What's in it for you:
- Competitive salary and full benefits package
- PTO / medical insurance
- Exposure to cutting-edge AI/LLM projects in an innovative environment
- Career growth opportunities in AI/ML leadership
- A collaborative, AI-driven work culture

Entra Solutions is an equal employment opportunity employer, and we welcome applicants from diverse backgrounds. Join us and be a part of our dynamic team driving innovation in the FinTech industry.
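As a concrete illustration of the vector-search component of a RAG system named in this listing, here is a minimal FAISS sketch; the random vectors stand in for real document embeddings, and the dimension and top-k are assumptions for the example.

```python
# Sketch: nearest-neighbour retrieval for a RAG system with FAISS.
# Vectors are random stand-ins for real document embeddings.
import faiss
import numpy as np

dim = 384                                   # assumed embedding size
docs = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatIP(dim)              # inner-product similarity
faiss.normalize_L2(docs)                    # so IP behaves like cosine
index.add(docs)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)        # top-5 chunks for the prompt
print(ids[0], scores[0])
```

In a full RAG flow, the retrieved chunk ids map back to text passages that are stitched into the LLM prompt.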

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Senior Data Scientist / AI Engineer with 8-10 years of experience, you will be a vital part of our team working on an AI-enabled ERP solution. Your primary responsibilities will include designing scalable AI models, implementing ML pipelines, and exploring the latest AI technologies to drive innovation. You will also manage and mentor a team of data scientists and AI engineers, fostering a collaborative and efficient work environment.

Your key responsibilities will involve designing and building AI models from scratch, optimizing machine learning algorithms, and integrating them into our ERP solution. You will be responsible for developing and maintaining an AI development and deployment pipeline using cloud and containerized solutions. Furthermore, your role will include conducting R&D activities to identify opportunities for AI-driven automation; working on sentiment analysis, NLP, and computer vision tasks; and deploying ML models while ensuring seamless integration into production systems.

Collaboration with product teams to align AI strategies with business objectives, and leading and mentoring a team of data scientists and AI engineers to foster a collaborative team culture, are essential aspects of this role. Your expertise in Python and ML frameworks, strong understanding of ML algorithms, experience with deep learning architectures, NLP, and computer vision, and hands-on experience deploying AI models using Flask APIs, Docker, and Kubernetes are critical for success in this position (a minimal serving sketch follows below).

Preferred skills include knowledge of Java, experience with software development best practices, an understanding of the SDLC, version control, and CI/CD pipelines, and experience with big data technologies.

If you are passionate about working with AI-driven products and have a proven track record of solving real-world challenges, we are excited to hear from you! This is a full-time position located in Trivandrum/Kochi, with a remote option available initially. If you have a minimum of 8 years of experience in AI, Machine Learning, or Data Science roles, and have worked on AI model deployment or building AI pipelines, we encourage you to apply. Leadership and team management experience, along with the ability to guide and develop junior team members, are essential requirements for this role. Your strong problem-solving skills, ability to articulate AI concepts effectively, and experience interacting with clients to communicate AI-driven solutions will be highly valued.
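To illustrate the "deploying AI models using Flask APIs" skill this listing names, here is a minimal serving sketch; the model artifact, route, and feature layout are hypothetical assumptions, and a production deployment would sit behind a WSGI server in a Docker container rather than Flask's dev server.

```python
# Sketch: serving a trained model behind a Flask API.
# Model file, route, and input shape are hypothetical.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # assumed pre-trained artifact

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    preds = model.predict(features).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```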

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

The responsibilities of the role involve designing and implementing Azure Synapse Analytics solutions for data processing and reporting. You will be required to optimize ETL pipelines, SQL pools, and Synapse Spark workloads while ensuring data quality, security, and governance best practices are followed. Collaborating with business stakeholders to develop data-driven solutions, and mentoring a team of data engineers, are key aspects of this position.

To excel in this role, you should possess 6-10 years of experience in data engineering, BI, or cloud analytics. Expertise in Azure Synapse, Azure Data Factory, SQL, and ETL processes is essential, and experience with Fabric is strongly desirable. Strong leadership, problem-solving, and stakeholder management skills are required, and knowledge of Power BI, Python, or Spark is a plus. A deep understanding of data modelling techniques, the design and development of ETL pipelines, Azure resource cost management, and writing complex SQL queries are important competencies. Familiarity with authorization and security best practices for Azure components, master data/metadata management, and data governance is crucial. Being able to manage a complex and rapidly evolving business, and to actively lead, develop, and support team members, is vital. An agile mindset and the ability to adapt to constant changes in risks and forecasts are expected.

Thorough knowledge of data warehouse architecture, principles, and best practices is necessary. Expertise in designing star and snowflake schemas, identifying facts and dimensions, and selecting appropriate granularity levels is also required (a star-schema query sketch follows below). Ensuring data integrity within the dimensional model by validating data and identifying inconsistencies is part of the role. You will work closely with Product Owners and data engineers to translate business needs into effective dimensional models.

Joining MRI Software offers the opportunity to lead AI-driven data integration projects in real estate technology, work in a collaborative and innovative environment with global teams, and access competitive compensation, career growth opportunities, and exposure to cutting-edge technologies. The ideal candidate should hold a Bachelor's/Master's degree in software engineering, Computer Science, or a related area.

The benefits of this position include hybrid working arrangements, an annual performance-related bonus, 6x Flexi any days, medical insurance coverage for extended family members, and an engaging, fun, and inclusive culture at MRI Software.

MRI Software delivers innovative applications and hosted solutions that empower real estate companies to enhance their business. With a flexible technology platform and an open and connected ecosystem, we cater to the unique needs of real estate businesses globally. With offices across various countries and a diverse team, we provide expertise and insight to support our clients effectively. MRI Software is proud to be an Equal Employment Opportunity employer.
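For the star-schema work described above, here is a minimal PySpark sketch of the fact-to-dimension join pattern, as it might run in a Synapse Spark notebook; the table and column names are invented to show the shape of the query, not taken from the listing.

```python
# Sketch: querying a star schema from a Spark notebook.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

fact = spark.table("gold.fact_sales")     # hypothetical fact table
dim = spark.table("gold.dim_product")     # hypothetical dimension

report = (fact.join(dim, "product_key")   # join on the surrogate key
          .groupBy("category")
          .agg(F.sum("net_amount").alias("revenue"))
          .orderBy(F.desc("revenue")))
report.show()
```

The surrogate-key join is what the schema's granularity decisions protect: one fact row per sale, one dimension row per product version.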

Posted 3 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

We're Hiring: Passionate Preschool Teacher at Maple Bear, Sector 71, Gurgaon! 🐻📚 Maple Bear Canadian Pre-school, Sector 71 Gurgaon, is looking for a dedicated and experienced Preschool Teacher to join our team! If you have a genuine love for early childhood education, prior experience in a preschool setting, and a nurturing approach that brings out the best in every child—you might be the perfect fit for our Maple Bear family. 🔹 Location: Sector 71, Gurgaon 🔹 Role: Preschool Teacher 🔹 Experience: Must have prior experience working with preschoolers 🔹 Key Traits: Warm, patient, energetic, creative, and fluent in English At Maple Bear, we follow a Canadian early childhood curriculum designed to spark curiosity, build foundational skills, and create joyful learning experiences in a safe and loving environment. If shaping young minds in their most formative years excites you, we'd love to hear from you!

Posted 3 days ago

Apply

2.0 - 6.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer II / Senior Data Engineer
Job Location: Bengaluru, Pune - India

Job Summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for software developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources (see the Airflow sketch after this listing).
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Step Functions.
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data engineering projects.
- Develop ETL/ELT pipelines using Python scripts and SQL queries to extract insights from structured and unstructured data sources.
- Implement web scraping techniques to collect relevant data from various websites and APIs.
- Ensure high availability of the system by implementing monitoring tools like CloudWatch.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE).
No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.
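As an illustration of the Airflow-based ETL work this listing describes, here is a minimal DAG sketch; the DAG id, schedule, and task bodies are assumptions (Airflow 2.x imports), with stubs standing in for real extract and transform logic.

```python
# Sketch: a daily Airflow DAG of the shape described in the listing.
# DAG id, schedule, and task bodies are assumptions; Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data, e.g. from an API or S3")

def transform():
    print("clean and validate records")

with DAG(
    dag_id="claims_pipeline",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2                         # extract runs before transform
```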

Posted 3 days ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description
- 1 year of experience developing and training machine learning models using structured and unstructured data.
- Experience with LLMs, transformers, and generative AI (e.g., GPT, Claude, etc.).
- Knowledge of AI/LLM stacks such as LangChain, LangGraph, and vector databases.
- Exposure to conversational interfaces, AI copilots, and agent-based automation.
- Experience with deploying models in production environments (e.g., MLflow); see the sketch below.
- Strong programming skills in Python (with libraries such as TensorFlow, PyTorch, scikit-learn, NumPy, etc.).
- Solid understanding of machine learning fundamentals, deep learning architectures, natural language processing, and computer vision.
- Knowledge of data processing frameworks (e.g., Pandas, Spark) and databases (SQL, NoSQL).
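To make the MLflow deployment point concrete, a minimal sketch of logging a model during training and reloading it later for serving; the dataset, model, and metric are illustrative stand-ins, not from the listing.

```python
# Sketch: tracking and reloading a model with MLflow.
# Dataset, model, and metric are illustrative stand-ins.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run() as run:
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # stored as a run artifact

# Later, e.g. in a serving process:
reloaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(reloaded.predict(X[:3]))
```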

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a part of ZS, you will have the opportunity to work in a place driven by passion that aims to change lives. ZS is a management consulting and technology firm dedicated to enhancing life and its quality. The core strength of ZS lies in its people, who work collectively to develop transformative solutions for patients, caregivers, and consumers worldwide. By adopting a client-first approach, ZS employees bring impactful results to every engagement, partnering closely with clients to design custom solutions and technology products that drive value and yield positive outcomes in key areas of their business. Your role at ZS will require you to bring inquisitiveness for learning, innovative ideas, courage, and dedication to make a life-changing impact.

At ZS, individuals are highly valued, with recognition of both the visible and invisible facets of their identities, personal experiences, and belief systems. These elements shape the uniqueness of each individual and contribute to the diverse tapestry within ZS. ZS acknowledges and celebrates personal interests, identities, and the thirst for knowledge as integral components of success within the organization. Learn more about the diversity, equity, and inclusion initiatives at ZS, along with the networks that support ZS employees in fostering community spaces, accessing necessary resources for growth, and amplifying the messages they are passionate about.

As an Architecture & Engineering Specialist specializing in ML Engineering at ZS's India Capability & Expertise Center (CEC), you will be part of a team that constitutes over 60% of ZS employees across three offices in New Delhi, Pune, and Bengaluru. The CEC plays a pivotal role in collaborating with colleagues from North America, Europe, and East Asia to deliver practical solutions to clients that drive the company's operations. Upholding standards of analytical, operational, and technological excellence, the CEC leverages collective knowledge to enable ZS teams to achieve superior outcomes for clients.

Joining ZS's Scaled AI practice within the Architecture & Engineering Expertise Center will immerse you in a dynamic ecosystem focused on generating continuous business value for clients through innovative machine learning, deep learning, and engineering capabilities. In this role, you will collaborate with data scientists to craft cutting-edge AI models, develop and utilize advanced ML platforms, establish and implement sophisticated ML pipelines, and oversee the entire ML lifecycle.
**Responsibilities:**
- Design and implement technical features using best practices for the relevant technology stack
- Collaborate with client-facing teams to grasp the solution context and contribute to technical requirement gathering and analysis
- Work alongside technical architects to validate design and implementation strategies
- Write production-ready code that is easily testable, comprehensible to other developers, and addresses edge cases and errors
- Ensure top-notch quality deliverables by adhering to architecture/design guidelines and coding best practices, and by engaging in periodic design/code reviews
- Develop unit tests and higher-level tests to handle expected edge cases, errors, and optimal scenarios
- Utilize bug tracking, code review, version control, and other tools for organizing and delivering work
- Participate in scrum calls and agile ceremonies, and effectively communicate progress, issues, and dependencies
- Contribute consistently by researching and evaluating the latest technologies, conducting proofs-of-concept, and creating prototype solutions
- Aid the project architect in designing modules/components of the overall project/product architecture
- Break down large features into estimable tasks, lead estimation, and defend them with clients
- Independently implement complex features with minimal guidance, such as service- or application-wide changes
- Systematically troubleshoot code issues/bugs using stack traces, logs, monitoring tools, and other resources
- Conduct code/script reviews of senior engineers within the team
- Mentor and cultivate technical talent within the team

**Requirements:**
- Minimum 5+ years of hands-on experience in deploying and productionizing ML models at scale
- Proficiency in scaling GenAI or similar applications to accommodate high user traffic and large datasets, and to reduce response time
- Strong expertise in developing RAG-based pipelines using frameworks like LangChain & LlamaIndex
- Experience in crafting GenAI applications such as answering engines, extraction components, and content authoring
- Expertise in designing, configuring, and utilizing ML engineering platforms like SageMaker, MLflow, Kubeflow, or other relevant platforms
- Familiarity with big data technologies including Hive, Spark, and Hadoop, and queuing systems like Apache Kafka/RabbitMQ/AWS Kinesis (a consumer sketch follows this listing)
- Ability to quickly adapt to new technologies, innovate in solution creation, and independently conduct POCs on emerging technologies
- Proficiency in at least one programming language such as PySpark, Python, Java, or Scala, and solid foundations in data structures
- Hands-on experience in building metadata-driven, reusable design patterns for data pipelines, orchestration, and ingestion patterns (batch, real-time)
- Experience in designing and implementing solutions on distributed computing and cloud services platforms (e.g., AWS, Azure, GCP)
- Hands-on experience in constructing CI/CD pipelines and awareness of application monitoring practices

**Additional Skills:**
- AWS/Azure Solutions Architect certification with a comprehensive understanding of the broader AWS/Azure stack
- Knowledge of DevOps CI/CD and data security, and experience in designing on cloud platforms
- Willingness to travel to global offices as required to collaborate with clients or internal project teams

**Perks & Benefits:**
ZS provides a holistic total rewards package encompassing health and well-being, financial planning, annual leave, personal growth, and professional development.
**Perks & Benefits:** ZS offers a comprehensive total rewards package covering health and well-being, financial planning, annual leave, personal growth, and professional development. Robust skills development programs, multiple career progression options, internal mobility paths, and a collaborative culture empower you to thrive both independently and as part of a global team. ZS is committed to a flexible and connected way of working, combining work from home with on-site presence at clients/ZS offices for the majority of the week, so that the ZS culture and innovative practices can thrive through both planned and spontaneous face-to-face interactions.

**Travel:** Travel is an essential aspect of working at ZS, especially for client-facing roles; business needs dictate travel priorities. While some projects may be local, all client-facing employees should be prepared to travel as required. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and grow professionally through exposure to different environments and cultures.

**Application Process:** Candidates must possess or be able to obtain work authorization for their intended country of employment. To be considered, applicants must submit an online application, including a complete set of transcripts (official or unofficial).

*Note: NO AGENCY CALLS, PLEASE.*

For more information, visit [ZS Website](www.zs.com).

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Embark on a transformative journey as a Data Scientist AI/ML - AVP at Barclays in the Group Control Quantitative Analytics team, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

Group Control Quantitative Analytics (GCQA) is a global organization of highly specialized data scientists working on Artificial Intelligence, Machine Learning, and GenAI model development and model management, including governance and monitoring. GCQA is led by Remi Cuchillo under Lee Gregory, Chief Data and Analytics Officer (CDAO) in Group Control. GCQA is responsible for developing and managing AI/ML/GenAI models (including governance and regular model monitoring) and providing analytical support across areas including Fraud, Financial Crime, Customer Due Diligence, Controls, and Security within Barclays. The Data Scientist position provides project-specific leadership in building targeting solutions that integrate effectively into existing systems and processes while delivering strong and consistent performance. Working with the GC CDAO team, the Quantitative Analytics Data Scientist role provides expertise in project design, predictive model development, validation, monitoring, tracking, and implementation.

To be successful in this role, you should possess the following skillsets:
- Python programming.
- Knowledge of Artificial Intelligence and Machine Learning algorithms, including NLP.
- SQL.
- Spark/PySpark (a minimal model-development sketch follows the Accountabilities below).
- Predictive model development.
- Model lifecycle and model management, including monitoring, governance, and implementation.
- DevOps tools like Git/Bitbucket.
- Project management using JIRA.

Some other highly valued skills include:
- DevOps tools such as TeamCity and Jenkins.
- Knowledge of the Financial/Banking domain.
- Knowledge of GenAI tools and how they work.
- AWS.
- Databricks.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Noida office.

Purpose of the role: To design, develop, implement, and support mathematical, statistical, and machine learning models and analytics used in business decision-making.

Accountabilities:
- Design analytics and modelling solutions to complex business problems using domain expertise.
- Collaboration with technology to specify any dependencies required for analytical solutions, such as data, development environments, and tools.
- Development of high-performing, comprehensively documented analytics and modelling solutions, demonstrating their efficacy to business users and independent validation teams.
- Implementation of analytics and models in accurate, stable, well-tested software, working with technology to operationalise them.
- Provision of ongoing support for the continued effectiveness of analytics and modelling solutions to users.
- Demonstrate conformance to all Barclays Enterprise Risk Management Policies, particularly Model Risk Policy.
- Ensure all development activities are undertaken within the defined control environment.
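As a loose illustration of the Spark/PySpark and predictive-model skills listed above, here is a minimal sketch of a pyspark.ml training pipeline. The input path, column names, and fraud use case are illustrative assumptions, not Barclays specifics.

```python
# A minimal sketch of predictive model development in PySpark.
# Path and feature columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("fraud-model-sketch").getOrCreate()

# Hypothetical training data with numeric features and a 0/1 label.
df = spark.read.parquet("s3://example-bucket/training-data/")  # assumed path

assembler = VectorAssembler(
    inputCols=["amount", "txn_count_30d", "account_age_days"],  # assumed columns
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)

# Evaluate on the held-out split (area under ROC by default).
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print(f"Test AUC: {auc:.3f}")
```

In a governed model lifecycle of the kind GCQA describes, a fitted pipeline like this would then go through independent validation and ongoing monitoring before and after deployment.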
Assistant Vice President Expectations:
- Advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions.
- Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
- OR, for an individual contributor: lead collaborative assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. Identify new directions for assignments and/or projects, combining cross-functional methodologies or practices to meet required outcomes.
- Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues.
- Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda.
- Take ownership for managing risk and strengthening controls in relation to the work done.
- Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function.
- Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy.
- Engage in complex analysis of data from multiple internal and external sources of information (such as procedures and practices in other areas, teams, and companies) to solve problems creatively and effectively.
- Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience.
- Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Embark on a transformative journey as a Data Scientist AI/ML at Barclays in the Group Control Quantitative Analytics team, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

Group Control Quantitative Analytics (GCQA) is a global organization of highly specialized data scientists working on Artificial Intelligence, Machine Learning, and GenAI model development and model management, including governance and monitoring. GCQA is led by Remi Cuchillo under Lee Gregory, Chief Data and Analytics Officer (CDAO) in Group Control. GCQA is responsible for developing and managing AI/ML/GenAI models (including governance and regular model monitoring) and providing analytical support across areas including Fraud, Financial Crime, Customer Due Diligence, Controls, and Security within Barclays. The Data Scientist position provides project-specific leadership in building targeting solutions that integrate effectively into existing systems and processes while delivering strong and consistent performance. Working with the GC CDAO team, the Quantitative Analytics Data Scientist role provides expertise in project design, predictive model development, validation, monitoring, tracking, and implementation.

To be successful in this role, you should possess the following skillsets:
- Python programming.
- Knowledge of Artificial Intelligence and Machine Learning algorithms, including NLP.
- SQL.
- Spark/PySpark.
- Predictive model development.
- Model lifecycle and model management, including monitoring, governance, and implementation.
- DevOps tools like Git/Bitbucket.
- Project management using JIRA.

Some other highly valued skills include:
- DevOps tools such as TeamCity and Jenkins.
- Knowledge of the Financial/Banking domain.
- Knowledge of GenAI tools and how they work.
- AWS.
- Databricks.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Noida office.

Purpose of the role: To design, develop, implement, and support mathematical, statistical, and machine learning models and analytics used in business decision-making.

Accountabilities:
- Design analytics and modelling solutions to complex business problems using domain expertise.
- Collaboration with technology to specify any dependencies required for analytical solutions, such as data, development environments, and tools.
- Development of high-performing, comprehensively documented analytics and modelling solutions, demonstrating their efficacy to business users and independent validation teams.
- Implementation of analytics and models in accurate, stable, well-tested software, working with technology to operationalise them.
- Provision of ongoing support for the continued effectiveness of analytics and modelling solutions to users.
- Demonstrate conformance to all Barclays Enterprise Risk Management Policies, particularly Model Risk Policy.
- Ensure all development activities are undertaken within the defined control environment.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
- OR, for an individual contributor: develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function.
- Make evaluative judgements based on the analysis of factual information, paying attention to detail.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 3 days ago

Apply

1.0 - 5.0 years

0 Lacs

pune, maharashtra

On-site

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact at ZS.

At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems: the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

As a Senior Cloud Site Reliability Engineer at ZS, you will be part of the CCoE (Cloud Center of Excellence) team, which builds, maintains, and helps architect the systems enabling ZS client-facing software solutions. The CCoE team defines and implements best practices to ensure performant, resilient, and secure cloud solutions. The team comprises analytical problem solvers from diverse backgrounds who share a passion for quality delivery, whether the customer is a client or another ZS employee, and has a presence in ZS's Evanston, Illinois, and Pune, India offices.

**What You'll Do:** As a Senior Cloud Site Reliability Engineer, you will work with a team of operations engineers and software developers to analyze, maintain, and nurture our cloud solutions/products to support the company's ever-growing clientele. As a technical expert, you will collaborate closely with various teams to ensure the stability of the environment by:
- Analyzing the current state, designing appropriate solutions, and working with the team to implement them.
- Coordinating emergency responses, performing root cause analysis, and identifying and implementing solutions to prevent recurrences.
- Working with the team to identify ways to increase MTBF and lower MTTR for the environment.
- Reviewing the entire application stack and executing initiatives to reduce failures, defects, and issues with overall performance.
- Identifying and working with the team to implement more efficient system procedures.
- Maintaining environment monitoring systems to provide the best visibility into the state of the deployed products/solutions (see the sketch after this list).
- Performing root cause analysis on incoming infrastructure alerts and working with teams to resolve them.
- Maintaining performance analysis tools, identifying any adverse changes to performance, and working with the teams to resolve them.
- Researching industry trends and technologies and promoting adoption of best-in-class tools and technologies.
- Taking the initiative to advance the quality, performance, or scalability of our cloud solutions by influencing the architecture or design of our products.
- Designing, developing, and executing automated tests to validate solutions and environments.
- Troubleshooting issues across the entire stack: infrastructure, software, application, and network.
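As a loose illustration of the monitoring and root-cause work described above, here is a minimal sketch of an automated AWS health check using boto3. The region and the specific status filter are illustrative assumptions.

```python
# A minimal sketch of an automated EC2 health check with boto3.
# Region and filter choice are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# List instances whose instance-level status checks are failing.
resp = ec2.describe_instance_status(
    Filters=[{"Name": "instance-status.status", "Values": ["impaired"]}]
)
for status in resp["InstanceStatuses"]:
    print(
        status["InstanceId"],
        "instance:", status["InstanceStatus"]["Status"],
        "system:", status["SystemStatus"]["Status"],
    )
```

A real SRE setup would feed results like these into an alerting or ticketing pipeline rather than printing them, and would page on sustained impairment rather than single checks.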
**What You'll Bring:**
- 3+ years of experience working as a Site Reliability Engineer or in an equivalent position.
- 2+ years of experience with AWS cloud technologies; at least one AWS certification (Solutions Architect / DevOps Engineer) is required.
- 1+ years of experience functioning as a senior member of an infrastructure/software team.
- Hands-on experience with AWS services such as EC2, RDS, EMR, CloudFront, ELB, API Gateway, CodeBuild, AWS Config, Systems Manager, Service Catalog, and Lambda.
- Full-stack IT experience with *nix, Windows, network/firewall concepts, source control (Bitbucket), and build/dependency management and continuous integration systems (TeamCity, Jenkins).
- Expertise in at least one scripting language, Python preferred.
- Firm understanding of application reliability, performance tuning, and scalability.
- Exposure to the big data technology stack (Spark, Hadoop, Scala, etc.) is preferred.
- Solid knowledge of infrastructure and cloud-native services along with network technologies.
- Solid understanding of RDBMS and cloud database engines such as PostgreSQL and MySQL.
- Firm understanding of clusters, load balancers, and CDNs.
- Experience in fault-tolerant system design.
- Familiarity with Splunk data analysis, Datadog, or similar tools is a plus.
- A Bachelor's degree (Master's preferred) in a related technical field.
- Excellent analytical, troubleshooting, and communication skills.
- Strong verbal, written, and team presentation skills; fluency in English is required.
- Initiative and the ability to remain flexible and responsive in a dynamic environment.
- Ability to quickly learn new platforms, languages, tools, and techniques as needed to meet project requirements.

**Perks & Benefits:** ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working: a flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

**Travel:** Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

**To Complete Your Application:** Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find out more at: www.zs.com

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III - Big Data/Java/Scala at JPMorgan Chase within the Liquidity Risk (LRI) team, you will design and implement the next-generation build-out of a cloud-native liquidity risk management platform for JPMC. The Liquidity Risk technology organization aims to provide comprehensive solutions for managing the firm's liquidity risk and meeting our regulatory reporting obligations across 50+ markets. The program includes the strategic build-out of advanced liquidity calculation engines, the incorporation of AI and ML into our liquidity risk processes, and digital-first reporting capabilities. The target platform must process 40-60 million transactions and positions daily, calculate the risk presented by the current actual as well as model-based what-if state of the market, build a multidimensional picture of the corporate risk profile, and provide the ability to analyze it in real time (a small aggregation sketch follows this posting).

Job Responsibilities:
- Executes standard software solutions, design, development, and technical troubleshooting.
- Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
- Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development.
- Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems.
- Adds to a team culture of diversity, equity, inclusion, and respect.
- Contributes to the team's drive for continual improvement of the development process and innovative solutions to meet business needs.
- Applies appropriate dedication to support business goals through technology solutions.

Required Qualifications, Capabilities, and Skills:
- Formal training or certification in software engineering concepts and 2+ years of applied experience.
- Hands-on development experience and in-depth knowledge of Java, Scala, Spark, and related big data technologies.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience with cloud technologies (AWS).
- Experience across the whole Software Development Life Cycle.
- Exposure to agile methodologies such as CI/CD, application resiliency, and security.
- Emerging knowledge of software applications and technical processes within a technical discipline.
- Ability to work closely with stakeholders to define requirements.
- Interacting with partners across feature teams to collaborate on reusable services that meet solution requirements.

Preferred Qualifications, Capabilities, and Skills:
- Experience working on big data solutions, with evidence of the ability to analyze data to drive solutions.
- Exposure to complex computing using the JVM and big data.
- Ability to find issues and optimize existing workflows.
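As a loose illustration of the daily transaction-and-position processing this posting describes, here is a minimal PySpark aggregation sketch. The schema, paths, and grouping keys are illustrative assumptions, not JPMC specifics.

```python
# A minimal sketch of large-scale transaction aggregation in PySpark.
# Paths, columns, and grouping keys are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("liquidity-positions-sketch").getOrCreate()

txns = spark.read.parquet("s3://example-bucket/transactions/")  # assumed path

# Aggregate daily net flows per legal entity and currency -- one
# dimension of a multidimensional risk profile.
daily_positions = (
    txns.groupBy("legal_entity", "currency", F.to_date("txn_ts").alias("txn_date"))
    .agg(
        F.sum("amount").alias("net_flow"),
        F.count("*").alias("txn_count"),
    )
)

# Partitioning by date keeps downstream what-if recalculations cheap.
daily_positions.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-bucket/daily-positions/"  # assumed output path
)
```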

Posted 3 days ago

Apply

6.0 - 12.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

You will be responsible for leveraging your 6-12 years of experience in data warehouse and big data technologies to contribute to our team in Trivandrum. Your expertise in programming languages such as Scala, Spark, PySpark, Python, and SQL, along with big data technologies like Hadoop, Hive, Pig, and MapReduce, will be crucial for this role. Your proficiency in ETL and data engineering, including data warehouse design, ETL, data analytics, data mining, and data cleansing, will also be highly valued.

As part of our team, you will be expected to have hands-on experience with cloud platforms like GCP and Azure, as well as tools and frameworks such as Apache Hadoop, Airflow, Kubernetes, and containers. Your skills in data pipeline creation, optimization, troubleshooting, and data validation will play a key role in ensuring the efficiency and accuracy of our data processes (a minimal orchestration sketch follows this posting).

Ideally, you should have at least 4 years of experience working with Scala, Spark, PySpark, Python, and SQL, in addition to 3+ years of strategic data planning, governance, and standard procedures. Experience in Agile environments and a good understanding of Java, ReactJS, and Node.js will be beneficial. Your ability to work with data analytics, machine learning, and optimization will be an advantage, and knowledge of managing big data workloads and containerized environments, along with experience analyzing large datasets to optimize data workflows, will further strengthen your profile for this position.

UST is a global digital transformation solutions provider with a track record of working with some of the world's best companies for over 20 years. With a team of over 30,000 employees in 30 countries, we are committed to making a real impact through transformation. If you are passionate about innovation, agility, and driving positive change through technology, we invite you to join us on this journey of creating boundless impact and touching billions of lives in the process.
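As a loose illustration of the pipeline orchestration this role mentions, here is a minimal Airflow DAG sketch (Airflow 2.4+ assumed). The task bodies, schedule, and DAG id are illustrative placeholders.

```python
# A minimal sketch of a daily batch pipeline orchestrated with Airflow.
# DAG id, schedule, and task logic are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")  # placeholder for real extraction logic


def validate():
    print("run data-quality checks")  # placeholder for validation logic


with DAG(
    dag_id="example_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Validation runs only after extraction succeeds.
    extract_task >> validate_task
```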

Posted 3 days ago

Apply

0.0 - 31.0 years

2 - 3 Lacs

Burari, New Delhi

On-site

📍 Location: Burari, Delhi (Work from Office)
🕒 Full-Time | 💼 Department: Sales & Client Acquisition
📢 Company: Perridot Media – India's Viral Campaign Experts

About Perridot Media: At Perridot Media, we craft viral campaigns that spark conversations and connect brands with millions. From meme marketing to high-impact digital strategies, we help brands like ZEE5, Netflix, Amazon Prime, and more dominate the internet. Now, we're looking for a dynamic and confident Female Sales Executive to join our growing team!

Role Overview: We are hiring a smart, English-fluent Female Sales Executive to represent our marketing services to potential clients. If you're a go-getter who enjoys communication, client handling, and digital trends, this role is for you!

Key Responsibilities:
- Pitch and sell digital marketing services to potential clients
- Follow up with leads via calls, emails, or in-person meetings
- Maintain relationships with existing clients for repeat business
- Coordinate with the creative team to fulfill client requirements
- Achieve monthly sales targets and report progress regularly
- Represent Perridot Media professionally in all client interactions

Who Should Apply?
- Females only (freshers or up to 1 year of experience welcome)
- Must have excellent English communication (spoken + written)
- Confident, energetic, and presentable personality
- Basic knowledge of digital marketing or media trends is a plus

Perks & Benefits:
✨ Fixed salary + attractive incentives
🎯 Career growth in a fast-paced media agency
☕ Creative and youthful work environment
🧠 On-the-job training and mentoring

To Apply: 📩 Send your CV to hiring@peridotmedia.in with the subject line: Sales Executive Application – [Your Name]
🌐 Learn more about us at www.peridotmedia.in

Posted 3 days ago

Apply