3.0 - 7.0 years
0 Lacs
karnataka
On-site
We are seeking an experienced Big Data Engineer to join our team in Bangalore immediately. If you have a strong background in big data technologies, data processing frameworks, and cloud platforms, we would like to hear from you.

Your primary responsibilities will include designing, developing, and maintaining big data solutions using Hadoop, Hive, and Spark. You will build data processing pipelines with Spark (Scala/Java), implement real-time streaming solutions using Kafka, and work with PostgreSQL for data management and integration as part of your daily routine. Collaboration with various teams to deliver scalable and robust data solutions is a key aspect of this role.

You will use Git for version control and Jenkins for CI/CD pipeline automation. Experience deploying and managing workloads on AWS and/or Azure is highly desirable, and proficiency with Databricks for advanced Spark analytics (Databricks Spark certification preferred) will be advantageous. As a Big Data Engineer, you will also troubleshoot data and performance issues and optimize processes for efficiency.

If you are ready to contribute your expertise to a dynamic team and take on these exciting challenges, we encourage you to apply for this position.
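To give a concrete flavour of the pipeline work this role describes, here is a minimal PySpark sketch of a Kafka-to-PostgreSQL streaming job. It is an illustration only: the broker address, topic, event schema, and table names are assumptions rather than details from the posting, and the PostgreSQL JDBC driver must be on the Spark classpath.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> PostgreSQL.
# Broker, topic, schema, and connection details are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

def write_to_postgres(batch_df, batch_id):
    # Structured Streaming has no native JDBC sink, so each micro-batch
    # is written with the batch JDBC writer via foreachBatch.
    (batch_df.write.format("jdbc")
        .option("url", "jdbc:postgresql://db:5432/analytics")  # hypothetical
        .option("dbtable", "orders_stream")
        .option("user", "etl").option("password", "***")
        .mode("append").save())

query = (events.writeStream.foreachBatch(write_to_postgres)
         .option("checkpointLocation", "/tmp/chk/orders")
         .start())
query.awaitTermination()
```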
Posted 12 hours ago
4.0 years
0 Lacs
Mohali, Punjab
On-site
Job Information: Date Opened 08/08/2025 | Job Type: Full time | Industry: Technology | Work Experience: 4+ Years | Location: Mohali, Punjab, India | Zip/Postal Code: 160071

About XenonStack
XenonStack is the fastest-growing Data and AI Foundry for Agentic Systems, enabling people and organizations to gain real-time and intelligent business insights. We deliver innovation through:
- Akira AI – Building Agentic Systems for AI Agents
- XenonStack Vision AI – Vision AI Platform
- NexaStack AI – Inference AI Infrastructure for Agentic Systems
Our mission is to accelerate the world's transition to AI + Human Intelligence by building cloud-native platforms, decision-driven analytics, and enterprise-ready AI solutions.

The Opportunity
We are seeking an experienced Senior Data Engineer – Microsoft Azure with 4+ years of expertise in designing, building, and maintaining modern data platforms on Azure. This role offers the chance to own complex data engineering solutions, optimize large-scale data workflows, and work at the forefront of cloud-native, AI-driven analytics. You will collaborate closely with Data Science, AI, and Platform Engineering teams to deliver high-performance, scalable, and secure data systems that power real-time insights for enterprise clients.

Key Responsibilities
Azure Data Platform Development
- Design, develop, and maintain data pipelines, ETL/ELT workflows, and integration processes using Azure services.
- Implement scalable data architectures leveraging Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake.
Data Engineering & Optimization
- Build and optimize data ingestion and transformation processes for batch, streaming, and real-time analytics.
- Ensure data quality, security, and governance across the platform.
- Apply performance tuning to large-scale datasets to improve latency, throughput, and cost efficiency.
Collaboration & Delivery
- Work closely with Data Scientists, BI teams, and application developers to meet business requirements.
- Support advanced analytics and AI initiatives by enabling accessible, well-structured data.
- Participate in code reviews, architecture discussions, and DevOps practices for deployment and monitoring.
Innovation & Continuous Improvement
- Stay updated on Azure cloud-native innovations and recommend modern tools and frameworks.
- Automate workflows for scalability, maintainability, and operational efficiency.

Skills & Qualifications
Technical Skills
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4+ years of hands-on data engineering experience, with at least 2+ years in Microsoft Azure ecosystems.
- Proficiency in Azure Data Factory, Synapse Analytics, Databricks, and Data Lake Storage.
- Strong programming skills in Python and SQL, plus one additional language (Scala, Java, or C# preferred).
- Experience with data modeling, warehousing, and advanced ETL/ELT design patterns.
- Knowledge of Azure Event Hubs, Azure Stream Analytics, and other real-time processing services.
- Understanding of data governance, security, and compliance in cloud environments.
- Familiarity with CI/CD pipelines, Git, and Infrastructure-as-Code (Terraform, Bicep, ARM templates).
Professional Attributes
- Strong problem-solving and analytical thinking skills.
- Proven ability to work collaboratively in cross-functional teams.
- Excellent communication skills for both technical and non-technical audiences.
- Self-driven, adaptable, and passionate about continuous learning.

Career Growth & Benefits
1. Continuous Learning & Growth – access to advanced cloud and data certifications, and opportunities to work on AI + Big Data integration projects for global enterprises.
2. Recognition & Rewards – performance-based incentives and recognition for innovation.
3. Work Benefits & Well-Being – comprehensive medical insurance, project-based allowances, and cab facilities for women employees.

Life at XenonStack – Join Us & Make an Impact
At XenonStack, we foster a culture of cultivation built on bold, human-centric leadership. We value deep work, an obsession with adoption, and simplicity in every solution we deliver. Our product philosophy:
- Obsessed with Adoption – making AI accessible and simplifying enterprise processes.
- Obsessed with Simplicity – turning complexity into intuitive, impactful experiences.
Be part of our mission to accelerate the world's transition to AI + Human Intelligence.
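As an illustration of the Azure Databricks ingestion work listed above, the sketch below uses Databricks Auto Loader to incrementally land raw JSON from ADLS into a Delta table. The storage paths are hypothetical, and `spark` is the session Databricks provides in notebooks and jobs.

```python
# Minimal Databricks Auto Loader sketch: incrementally ingest raw JSON
# from ADLS into a Delta table. All paths and names are illustrative;
# `spark` is the ambient Databricks session.
raw_path = "abfss://raw@acct.dfs.core.windows.net/events/"        # hypothetical
bronze_path = "abfss://lake@acct.dfs.core.windows.net/bronze/events"

stream = (
    spark.readStream.format("cloudFiles")            # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", bronze_path + "/_schema")
    .load(raw_path)
)

(stream.writeStream.format("delta")
    .option("checkpointLocation", bronze_path + "/_checkpoint")
    .trigger(availableNow=True)                      # batch-style incremental run
    .start(bronze_path))
```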
Posted 12 hours ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Tech Lead at Carelon Global Solutions India, your primary responsibility will be collaborating with data architects to implement data models and ensure seamless integration with AWS services. You will develop solutions for moving data from the on-premise systems of large enterprises to the cloud using AWS services and Snowflake. You will also design and develop data ingestion pipelines that extract data from various sources and transform, aggregate, and enrich it, and implement data governance policies and procedures to ensure data quality, security, and compliance.

Your expertise will be required in technologies such as Snowflake, Python, AWS S3/Athena, RDS, CloudWatch, Lambda, Glue, and EMR clusters. You will need a solid understanding of nested JSON handling and flattening and of file formats such as Avro and Parquet. It will be essential to analyze day-to-day loads and issues, work closely with admin and architect teams during incidents, and document and simulate complex issues.

Moreover, you will oversee the technical aspects of projects, make design and architecture decisions, and ensure best practices are followed. Communicating effectively with peers, breaking down complex topics, and guiding cross-functional teams will be crucial, as will a deep understanding of the project's overall architecture and critical systems. Delegating work to yourself and team members, organizing the team around a project timeline, and collaborating with different business functions to identify and clear roadblocks for the team will be part of your routine.

To qualify for this position, you should have a Bachelor's degree in Information Technology, Data Engineering, or a similar field, along with a minimum of 3-9 years of experience with AWS services, a total of 8+ years of experience in Snowflake with AWS, data engineering, and cloud services, and 8-11 years of overall experience in IT. Experience with agile development processes is also preferred.

Being part of Carelon Global Solutions India means being in an environment that fosters growth, well-being, purpose, and a sense of belonging. With an extensive focus on learning and development, an innovative culture, holistic well-being, comprehensive rewards and recognition, competitive health and medical insurance coverage, best-in-class amenities and workspaces, and policies designed with associates at the center, you will have limitless opportunities to thrive and succeed.

At Carelon, we celebrate the diversity of our workforce and the diverse ways we work. We are committed to providing reasonable accommodations to empower our associates to deliver the best results for our customers. If you have a disability and need accommodation during the interview process, please ask for the Reasonable Accommodation Request Form. Carelon Global Solutions India is an equal opportunity employer, offering a full-time position with limitless opportunities and a culture that values inclusivity and diversity.
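The posting calls out nested JSON handling and flattening alongside Parquet; the sketch below shows one common PySpark approach, exploding an array of structs into rows and projecting struct fields into flat columns. The input schema and paths are assumptions for illustration.

```python
# Sketch of nested-JSON flattening in PySpark, then writing Parquet.
# The input schema and paths are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.appName("flatten-json").getOrCreate()

# e.g. {"id": 1, "customer": {"name": "...", "city": "..."},
#       "items": [{"sku": "...", "qty": 2}]}
df = spark.read.json("s3://bucket/raw/orders/")       # hypothetical path

flat = (
    df.withColumn("item", explode(col("items")))      # one row per array element
      .select(
          col("id"),
          col("customer.name").alias("customer_name"),  # dot into struct fields
          col("customer.city").alias("customer_city"),
          col("item.sku").alias("sku"),
          col("item.qty").alias("qty"),
      )
)

flat.write.mode("overwrite").parquet("s3://bucket/curated/orders/")
```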
Posted 13 hours ago
5.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
The Data Engineer role at Atgeir Solutions in Pune requires a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with 5-10 years of experience in IT consulting and services. As a Data Engineer, you will design, develop, and optimize data infrastructure and workflows to drive insightful analytics, enhance AI/ML models, and enable data-driven decision-making for clients.

Your primary responsibilities will include designing and implementing scalable data architectures, ETL pipelines, and data workflows to process, store, and analyze large datasets effectively. You will integrate and consolidate data from various sources, ensuring data integrity and quality, while implementing data governance practices and complying with information security standards. Collaborating closely with data scientists, software engineers, and business analysts, you will align data engineering solutions with business goals and requirements. Optimizing database performance, query execution, and storage solutions for efficiency is also a crucial part of the role, and you will contribute to innovative projects and mentor junior team members, fostering a culture of continuous learning and development.

The ideal candidate is proficient in programming languages such as Python, Java, or Scala, with hands-on experience in big data technologies like Hadoop and Spark. Strong knowledge of SQL and NoSQL databases, familiarity with data warehousing tools, and experience with cloud platforms such as GCP, AWS, or Azure are essential, along with expertise in ETL/ELT processes, data modeling, and data lake architectures, and knowledge of data visualization tools. Academic experience teaching relevant technical subjects and the ability to explain complex concepts to diverse audiences are beneficial. Soft skills including excellent analytical and problem-solving abilities, strong interpersonal skills, and the capacity to collaborate effectively in cross-functional teams are highly valued. Preferred qualifications include certifications in cloud technologies, familiarity with AI/ML tools and frameworks, and contributions to research papers, technical blogs, or open-source projects.
Posted 13 hours ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Development Engineer (Data Engineering) in Enterprise Data Solutions (EDS) at Mastercard, you will help build high-performance data pipelines that load data into the company's Data Warehouse. The Data Warehouse provides analytical capabilities to various business users, helping them address business challenges through data-driven insights. You will be an integral part of a growing organization, collaborating with experienced engineers to tackle complex problems.

Your responsibilities will include participating in medium-to-large data engineering projects, integrating new sources of real-time, streaming, batch, and API-based data into the platform, and supporting business stakeholders in leveraging data-driven insights for growth and transformation. You will build and maintain data processing workflows, ensure reliable integrations with internal systems and third-party APIs, and assist data analysts in deriving meaningful insights from complex datasets. Collaborating with cross-functional agile teams, you will drive projects through the full development cycle while promoting data engineering best practices within the team.

To excel in this role, you should hold at least a Bachelor's degree in Computer Science, Computer Engineering, or a related field, or possess equivalent work experience. You must have prior experience with Data Warehouse projects in a product or service organization, along with expertise in data engineering and in implementing end-to-end Data Warehouse projects in a Big Data environment. Proficiency with databases such as Oracle and Netezza, strong SQL knowledge, and experience building data pipelines using Spark with Scala/Python/Java on Hadoop are essential; familiarity with NiFi and Agile methodologies is advantageous. Strong analytical skills are necessary for debugging production issues, providing root-cause analysis, and implementing mitigation plans. Effective communication, relationship-building, collaboration, and organizational skills are essential. You should be detail-oriented, proactive, and able to work independently under pressure, demonstrating a high level of initiative and self-motivation, with the ability to quickly learn and adopt new technologies, conduct proofs of concept (POCs), and work effectively in diverse, geographically distributed project teams.

In addition to your technical responsibilities, as a Mastercard employee you are expected to uphold corporate security responsibilities, including compliance with security policies, maintaining the confidentiality and integrity of accessed information, reporting any security violations or breaches, and completing mandatory security trainings per Mastercard's guidelines.
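As a hedged illustration of the Oracle-plus-Spark-on-Hadoop stack mentioned above, this sketch batch-loads an Oracle table into a Hive warehouse table with PySpark. Connection details, table names, and the partition column are hypothetical, and the Oracle JDBC driver must be available to Spark.

```python
# Sketch: batch-load an Oracle table into a Hive table with PySpark.
# Connection details and table names are illustrative, not from the posting.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("oracle-to-hive")
         .enableHiveSupport().getOrCreate())

src = (spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")  # hypothetical
    .option("dbtable", "SALES.TRANSACTIONS")
    .option("user", "etl_user").option("password", "***")
    .option("fetchsize", "10000")        # larger fetches cut JDBC round trips
    .load())

(src.write.mode("overwrite")
    .partitionBy("txn_date")             # assumed partition column
    .saveAsTable("dw.transactions"))
```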
Posted 13 hours ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As an Associate Manager, Data Science within the Digital and Technology team, you will play a crucial role in modeling complex problems, uncovering insights, and streamlining workflows using statistical analysis, machine learning, generative AI, and data visualization techniques built on cutting-edge big data and AI technologies. Your primary focus will be collaborating with business partners on data science projects that drive advanced analytics and AI adoption to gain a competitive edge, enhance operational efficiency, and foster innovation.

Your responsibilities will include executing end-to-end advanced data science projects aligned with business goals; working with on-premise and cloud-based data warehouses and data lakes; staying abreast of the latest AI and ML technologies; constructing data pipelines for data ingestion; cleaning and imputing messy data; conducting in-depth exploratory data analysis; engineering features relevant to the problem at hand; creating and assessing data models using a variety of techniques; and presenting modeling results and recommendations to stakeholders in a clear, actionable manner.

Moreover, you will collaborate closely with business stakeholders to identify challenges and opportunities, develop data-driven strategies, and implement solutions leveraging AI and machine learning technologies. It is essential to adhere to established standards, improve documentation, share knowledge within teams, ensure data accuracy, consistency, and security, and collaborate effectively with team members.

To qualify, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or Applied Mathematics, with at least 4 years of experience in data science. You should have a strong background in machine learning, statistical analysis, and predictive modeling, with practical expertise in programming languages such as Python, R, or Scala. Proficiency with big data technologies and cloud-based solutions, plus experience developing scalable machine learning models for production environments, is crucial. Familiarity with data engineering tools, cloud service platforms, containerization, and CI/CD tools, together with excellent communication skills, is also required.

As an integral part of a fast-paced, collaborative environment, you should demonstrate strong analytical and problem-solving skills and a growth mindset that drives innovation. Your leadership behaviors should emphasize building outstanding teams, setting clear direction, simplification, collaboration, execution, accountability, fostering growth, embracing inclusivity, and maintaining an external focus.
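For a sense of the feature-engineering and model-assessment workflow described, here is a minimal scikit-learn sketch combining preprocessing, training, and cross-validated evaluation in one Pipeline; the dataset and column names are invented for illustration.

```python
# Minimal end-to-end modelling sketch with scikit-learn: preprocessing,
# training, and validation in one Pipeline. Feature names are made up.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("customers.csv")                 # hypothetical dataset
X, y = df.drop(columns=["churned"]), df["churned"]

pre = ColumnTransformer([
    ("num", StandardScaler(), ["tenure", "monthly_spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region", "plan"]),
])

model = Pipeline([("prep", pre), ("clf", GradientBoostingClassifier())])
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```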
Posted 13 hours ago
1.0 - 5.0 years
0 Lacs
haryana
On-site
As a Senior Quality Assurance Engineer at Ticketmaster Sport International Engineering in New Delhi, you will join a globally recognized leader in sports ticketing. The Ticketmaster Sports International Engineering division builds and maintains industry-standard ticketing software relied upon by high-profile sports clubs and organizations. Your role is to ensure the reliability, quality, and performance of our software solutions.

You will collaborate with a Microsoft .NET development team to enhance the end-user purchase experience through basket management, ticket delivery, and fulfillment. You will design and implement quality assurance practices, execute test cases, produce test reports, and improve automated test suites for robust, high-quality software solutions.

Key responsibilities:
- Quality control and sign-off of software releases
- Designing modular testing solutions
- Setting up and maintaining testing frameworks
- Developing quality assurance practices and test plans
- Executing test cases and preparing test plans
- Producing test and quality reports
- Creating automated test suites
- Collaborating with a team to deliver software solutions
- Reviewing defects and updating them for accuracy
- Operating effectively within a global organization
- Defining and advocating for QA standards and best practices

Required technical skills:
- 3+ years of experience in the IT industry
- Experience with Agile methodology and working in scrum teams
- Automation testing using Selenium
- Performance testing using Gatling
- Knowledge of C#/Java/Scala and OOP concepts
- Familiarity with CI tools such as GitLab CI and Jenkins
- Experience with web services testing and relational databases
- Understanding of web protocols and standards
- Hands-on experience with Git version control

Desired technical skills:
- Experience setting up automation test frameworks
- Testing in cloud environments such as AWS
- Mentoring QA staff on quality objectives
- Working with containerization technologies such as Docker
- Experience in microservice development
- Familiarity with GitLab CI pipelines and Octopus Deploy
- Knowledge of TestRail and code analysis tools such as SonarQube

Behavioral skills:
- Excellent communication and interpersonal skills
- Problem-solving abilities
- Enthusiasm for technology and a desire to grow as a QA software engineer
- Curiosity about new technologies
- Ability to respond positively to challenges
- Desire to take on responsibility and contribute to team success

At Ticketmaster, you will be part of a diverse and inspiring culture driven by teamwork, integrity, and belonging. If you are passionate about live entertainment and technology and want to contribute to a global vision of connecting people to live events, we welcome your application.
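By way of illustration of the Selenium automation listed in the required skills, here is a minimal UI test written with pytest; it is shown in Python for brevity even though the posting lists C#/Java/Scala, and the URL and element locators are hypothetical.

```python
# Minimal automated UI check with Selenium + pytest. Shown in Python for
# brevity (the posting lists C#/Java/Scala); URL and locators are assumptions.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

@pytest.fixture
def driver():
    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")     # run without a display, e.g. in CI
    drv = webdriver.Chrome(options=opts)
    yield drv
    drv.quit()

def test_ticket_added_to_basket(driver):
    driver.get("https://example-tickets.test/event/123")   # hypothetical page
    WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "add-to-basket"))
    ).click()
    count = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".basket-count"))
    )
    assert count.text == "1"
```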
Posted 14 hours ago
15.0 - 19.0 years
0 Lacs
karnataka
On-site
Salesforce is seeking software developers who are eager to make a significant, measurable, positive impact through their code for users, the company's success, and the industry at large. You will join a team of top-notch engineers to develop innovative features that customers will cherish, adopt, and use, while ensuring the stability and scalability of our trusted CRM platform. The software engineer role at Salesforce encompasses architecture, design, implementation, and testing to guarantee the delivery of high-quality products, and you will have the opportunity to participate in code reviews, mentor junior engineers, and offer technical guidance to the team, depending on your seniority level.

At Salesforce, we prioritize writing high-quality, maintainable code that enhances product stability and simplifies our work. We embrace a hybrid work model that values the unique strengths of each team member and encourages personal growth, and we believe that autonomous teams empowered to make decisions foster individual and collective success for the product, the company, and our customers.

As a Backend Principal Engineer, your responsibilities will include:
- Developing new and innovative components in a rapidly evolving market to enhance scalability and efficiency.
- Creating high-quality, production-ready code to serve millions of users across our applications.
- Making design choices based on performance, scalability, and future growth.
- Contributing to all stages of the software development life cycle (SDLC) in a Hybrid Engineering model, including design, implementation, code reviews, automation, and testing.
- Building efficient components and algorithms in a microservice, multi-tenant SaaS cloud environment.
- Conducting code reviews, mentoring junior engineers, and providing technical guidance to the team, as appropriate to your seniority level.

Required skills:
- Proficiency in multiple programming languages and platforms.
- Over 15 years of software development experience.
- In-depth knowledge of and experience with distributed systems.
- Strong understanding of service-oriented architecture.
- Deep expertise in object-oriented programming and languages such as C++, C#/.NET, Java, Python, Scala, Go, and Node.js.
- Excellent grasp of RDBMS concepts and experience developing applications on SQL Server, MySQL, and PostgreSQL.
- Experience developing SaaS applications on public cloud infrastructure such as AWS, Azure, or GCP.
- Proficiency with queues, locks, scheduling, event-driven architecture, workload distribution, and relational and non-relational databases.
- Thorough understanding of software development best practices and leadership capabilities.
- Degree or equivalent relevant experience required; candidates are evaluated on core competencies for the role.

Preferred skills:
- Familiarity with NoSQL databases such as Cassandra and HBase, and document stores such as Elasticsearch.
- Experience with open-source projects such as Kafka, Spark, or ZooKeeper.
- Knowledge of, or contributions to, open-source technologies.
- Experience with native Windows or Linux development.
- Development of RESTful services.
- Understanding of security concepts such as mTLS, PKI, and OAuth/SAML.
- Experience with distributed caching and load-balancing systems.

Benefits & perks: Salesforce offers a comprehensive benefits package, including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more. Employees also have access to world-class enablement and on-demand training through Trailhead.com, along with exposure to executive thought leaders, regular 1:1 coaching with leadership, volunteer opportunities, and participation in the 1:1:1 model for community giving. For more details, please visit https://www.salesforcebenefits.com/
Posted 15 hours ago
4.0 - 5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Job Title: Data Science Specialist
Location: Thiruvananthapuram, Kerala

Job Summary
We are seeking a hands-on, results-driven Data Science Specialist with 4-5 years of experience designing and implementing advanced analytical solutions. The ideal candidate has strong statistical foundations, demonstrated expertise in solving real-world business problems with machine learning and data science, and a track record of building and deploying data products. Exposure to NLP and Generative AI will be considered an advantage.

Key Responsibilities
- Collaborate with cross-functional teams to translate business problems into data science use cases.
- Design, develop, and deploy data science models, including classification, regression, and ideally survival analysis techniques.
- Build and productionize data products that deliver measurable business impact.
- Perform exploratory data analysis, feature engineering, model validation, and performance tuning.
- Apply statistical methods to uncover trends, anomalies, and actionable insights.
- Implement scalable solutions using Python (or R/Scala), SQL, and modern data science libraries.
- Stay up to date with advancements in NLP and Generative AI and evaluate their applicability to internal use cases.
- Communicate findings and recommendations clearly to both technical and non-technical stakeholders.

Qualifications
Education: A Bachelor's degree in a quantitative field such as Statistics, Computer Science, Mathematics, Engineering, or a related discipline is required. A Master's degree or certifications in Data Science, Machine Learning, or Applied Statistics is a strong plus.

Experience
- 4-5 years of hands-on experience in data science projects, preferably across different domains.
- Demonstrated experience in end-to-end ML model development, from problem framing to deployment.
- Prior experience working with cross-functional business teams is highly desirable.

Must-Have Skills
- Statistical expertise: strong understanding of hypothesis testing, linear/non-linear regression, classification techniques, and distributions.
- Business problem solving: experience translating ambiguous business challenges into data science use cases.
- Model development: hands-on experience building and validating machine learning models (classification, regression, survival analysis).
- Programming proficiency: strong skills in Python (Pandas, NumPy, Scikit-learn, Matplotlib/Seaborn) and SQL.
- Data manipulation: experience handling structured/unstructured datasets, performing EDA, and cleaning data.
- Communication: ability to articulate complex technical concepts to non-technical audiences.
- Version control and collaboration: familiarity with Git/GitHub and collaborative development practices.
- Deployment mindset: understanding of how to build data products that are usable and scalable.

Preferred Skills
- Experience with survival analysis or time-to-event modelling techniques.
- Exposure to Natural Language Processing (NLP) methods (e.g., tokenization, embeddings, sentiment analysis).
- Familiarity with Generative AI technologies (e.g., LLMs, transformers, prompt engineering).
- Experience with MLOps tools, pipeline orchestration (e.g., MLflow, Airflow), or cloud platforms (AWS, GCP, Azure).
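Since survival analysis is called out both as a responsibility and a preferred skill, here is a brief sketch of one way to approach it: a Cox proportional-hazards model fitted with the lifelines library. The dataset and column names are assumptions.

```python
# Sketch of a Cox proportional-hazards model with the lifelines library,
# one way to approach the survival-analysis work mentioned above.
# The dataset and column names are illustrative.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("subscriptions.csv")   # hypothetical: one row per customer
# expected columns: tenure_months (duration), churned (event flag), covariates

cph = CoxPHFitter()
cph.fit(df, duration_col="tenure_months", event_col="churned")
cph.print_summary()                      # hazard ratios per covariate

# Predicted survival curves for the first few customers
surv = cph.predict_survival_function(df.head(3))
print(surv.head())
```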
Posted 15 hours ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As an experienced Data Architect with over 8 years of relevant experience, you will design and implement scalable, resilient data architectures for both batch and streaming data processing. Your role will involve developing data models and database structures to ensure efficient data storage and retrieval. Ensuring data security, integrity, and compliance with relevant regulations will be a key focus, and you will integrate data from various sources into the big data infrastructure while monitoring and optimizing its performance.

Collaboration will be a crucial aspect of the job: you will work closely with data scientists, engineers, and business stakeholders to understand requirements and translate them into technical solutions. Your expertise in evaluating and recommending new technologies and tools for data management and processing will be invaluable. You will also guide and mentor junior team members, identify and resolve complex data challenges, and actively participate in the pre- and post-sales process.

The ideal candidate holds a Bachelor's or Master's degree in computer science, computer engineering, or a relevant field, with 10+ years of total experience, including at least 2 years as a Big Data Architect. You should have a strong understanding of big data technologies such as Hadoop, Spark, and NoSQL databases, as well as cloud-based data services on AWS, Azure, and GCP. Proficiency in programming languages such as Python, Java, and Scala, plus Spark, is essential, along with experience in data modeling, database design, ETL processes, and data security principles.

In addition to technical skills, you will need strong analytical and problem-solving abilities, good communication and collaboration skills, knowledge of API design and development, and an understanding of data visualization techniques. Familiarity with authentication mechanisms such as LDAP, Active Directory, SAML, and Kerberos, and with authorization configuration for Hadoop-based distributed systems, is a plus, as is experience with DevOps methodology, toolsets, and automation.

If you are passionate about data architecture and possess the required skills and qualifications, this role offers an exciting opportunity to contribute to the design and implementation of cutting-edge data solutions.
Posted 15 hours ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Technology Senior Lead Analyst position is a senior role that involves establishing and implementing new or revised application systems and programs in coordination with the Technology team. Your main objective is to lead applications systems analysis and programming activities. You will lead the Data Science function regionally to meet goals, deploy new products, and enhance processes, and serve as a functional subject matter expert (SME) across the company, using advanced knowledge of algorithms, data structures, distributed systems, and networking to lead, architect, and drive broader adoption.

To be successful in this role, you should have at least 10+ years of relevant experience in an applications development role or senior-level experience in the data analytics/ML space, as well as 3+ years of experience applying AI to practical uses, including deep learning, NLP, and TensorFlow. Proficiency in Scala, Python, and other language- or domain-specific packages, as well as the Big Data ecosystem, is required. You should exhibit expertise across technology by understanding broader patterns and techniques as they apply to Citigroup's internal and external cloud platforms (AWS, PCF, Akamai). Acquiring relevant technology and financial industry skills (AWS PWS) and understanding all aspects of NGA technology, including innovative approaches and new opportunities, is essential. Strong communication skills are a must, including the ability to translate business use cases into tech specs, work with diverse project teams, and develop relationships with vendors.

You will analyze complex business processes, system processes, and industry standards to define and develop solutions to high-level problems. Your role also involves allocating work and acting as an advisor and coach to developers, analysts, and new team members, providing expertise in advanced applications programming, and planning assignments involving large budgets, cross-functional projects, or multiple projects. You will appropriately assess risk when making business decisions, safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations, and you will apply advanced knowledge of supported main system flows and comprehensive knowledge of multiple areas to achieve technology goals.

Qualifications for this role include 14+ years of relevant experience, hands-on experience with Big Data, ML, and Gen AI, and experience executing projects end to end. You should be a demonstrated subject matter expert in applications development, with knowledge of clients' core business functions and with leadership, project management, development, and relationship- and consensus-building skills. A Bachelor's degree or equivalent experience is required; a Master's degree is preferred.

Citi is an equal opportunity and affirmative action employer, offering full-time opportunities in the Technology job family group, specifically Applications Development.
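As a small, hedged illustration of the deep learning and NLP work with TensorFlow named in the requirements, the sketch below trains a toy Keras text classifier end to end; the example data and layer sizes are placeholders, not anything from the role.

```python
# Minimal TensorFlow/Keras text-classification sketch of the kind of
# NLP/deep-learning work described. Data and sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers

texts = tf.constant(["great product", "terrible service", "works fine"])
labels = tf.constant([1, 0, 1])

vectorize = layers.TextVectorization(max_tokens=20_000, output_sequence_length=64)
vectorize.adapt(texts)   # build the vocabulary from the training text

model = tf.keras.Sequential([
    vectorize,                                     # raw strings -> token ids
    layers.Embedding(input_dim=20_000, output_dim=64),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=3, verbose=0)
print(model.predict(tf.constant(["pretty good"])))
```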
Posted 16 hours ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate-level role that involves participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective is to contribute to applications systems analysis and programming activities.

You will use your knowledge of applications development procedures and concepts, along with basic knowledge of other technical areas, to identify and define necessary system enhancements. This includes using scripting tools, analyzing and interpreting code, consulting with users, clients, and other technology groups on issues, recommending programming solutions, and installing and supporting customer exposure systems. You will also apply fundamental knowledge of programming languages to design specifications, analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts.

Your role involves identifying problems, analyzing information, and making evaluative judgments to recommend and implement solutions, resolving issues by selecting solutions based on acquired technical experience and guided by precedent. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and other team members. You will also assess risk appropriately when making business decisions, with particular consideration for the firm's reputation and for safeguarding Citigroup, its clients, and assets, including driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 4-6 years of proven experience developing and managing big data solutions using Apache Spark; Scala is a must
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Experience working on Kafka and JMS/MQ applications and across multiple operating systems (Unix, Linux, Windows)
- Familiarity with data warehousing concepts, ETL processes, data modeling, data architecture, and data integration techniques
- Knowledge of best practices for data security, privacy, and compliance
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience developing frameworks and utility services, including logging/monitoring
- Experience delivering high-quality software through continuous delivery and code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.)
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Profound knowledge of implementing different data storage solutions such as RDBMS (Oracle), Hive, HBase, Impala, and NoSQL databases

Education:
- Bachelor's degree/University degree or equivalent experience

Please note that this job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required.
Posted 16 hours ago
8.0 - 20.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Senior Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective is to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users.

Your role requires in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. You will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality, consult with users, clients, and other technology groups, recommend advanced programming solutions, and install and assist with customer exposure systems. You will also ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts, operating with a limited level of direct supervision, exercising independence of judgment and autonomy, and acting as a subject matter expert to senior stakeholders and other team members.

You will be expected to assess risk appropriately when making business decisions, with particular consideration for the firm's reputation, and to safeguard Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 8 to 20 years of relevant experience
- Primary skills in Java/Scala and Spark
- Must have experience in Hadoop/Java/Spark/Scala/Python
- Experience in systems analysis and programming of software applications
- Experience managing and implementing successful projects
- Working knowledge of consulting/project management techniques and methods
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education: Bachelor's degree/University degree or equivalent experience

Please note that this job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required.
Posted 16 hours ago
2.0 - 31.0 years
17 Lacs
Bengaluru/Bangalore
On-site
Job Title: Data Architect

What are my responsibilities?
As a Data Architect, you will:
- Design and develop technical solutions that integrate disparate information and create meaningful insights for the business using big-data architectures.
- Build and analyze large structured and unstructured datasets on scalable cloud infrastructures.
- Develop prototypes and proofs of concept using multiple data sources and big-data technologies.
- Process, manage, extract, and cleanse data to apply data analytics effectively.
- Design and develop scalable end-to-end data pipelines for batch and stream processing.
- Stay current with the data analytics landscape, exploring new technologies, techniques, tools, and methods.
- Inspire enthusiasm for using modern data technologies to solve problems and deliver business value.

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Engineering/Analytics is desirable.

Experience Level: Minimum 8 years in software development, with at least 2-3 years of hands-on experience in Big Data/Data Engineering.

Desired Knowledge & Experience
Data Engineer - Big Data Developer:
- Spark: Spark 3.x, RDD/DataFrames/SQL, batch/Structured Streaming; knowledge of internals such as Catalyst/Tungsten/Photon.
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader.
- IDE & tools: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot.
- Testing: pytest, Great Expectations.
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing.
- Big data design: Lakehouse/medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction.
- Languages: Python/functional programming (FP).
- SQL: T-SQL, Spark SQL, HiveQL.
- Storage: data lake and big data storage design.

Additional helpful skills:
- Data pipelines: ADF, Synapse Pipelines, Oozie, Airflow.
- Languages: Scala, Java.
- NoSQL: Cosmos DB, MongoDB, Cassandra.
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model.
- SQL Server: T-SQL, stored procedures.
- Hadoop stack: HDInsight, MapReduce, HDFS, YARN, Oozie, Hive, HBase, Ambari, Ranger, Atlas, Kafka.
- Data catalogs: Azure Purview, Apache Atlas, Informatica.

Big Data Architect:
- Expertise: mastery of the technologies, languages, and methodologies listed under Data Engineer - Big Data Developer.
- Mentorship: mentor and educate developers on relevant technologies and methodologies.
- Architecture styles: Lakehouse, Lambda, Kappa, Delta, Data Lake, Data Mesh, Data Fabric, data warehouses (e.g., Data Vault).
- Application architecture: microservices, NoSQL, Kubernetes, cloud-native solutions.
- Experience: proven track record across multiple technology generations (Data Warehouse → Hadoop → Big Data → Cloud → Data Mesh).
- Certification: architect certifications such as Siemens Certified Software Architect or iSAQB CPSA.

Required Soft Skills & Other Capabilities:
- Excellent communication skills to convey technical concepts to non-technical stakeholders.
- Strong attention to detail and a proven ability to solve complex business problems.
- Initiative and resilience to experiment with new ideas.
- Effective planning and organizational skills.
- A collaborative mindset for sharing ideas and developing solutions.
- Ability to work independently and in a global team environment.
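To make the lakehouse/medallion items above concrete, here is a brief PySpark sketch of a bronze-to-silver step on Databricks: deduplicate, apply a basic quality gate, and write a partitioned Delta table. Table and column names are assumptions, and `spark` is the ambient Databricks session.

```python
# Sketch of a bronze -> silver step in a medallion/lakehouse layout:
# deduplicate, enforce basic quality, write partitioned Delta.
# Table names and columns are assumptions; `spark` is provided by Databricks.
from pyspark.sql import functions as F

bronze = spark.read.table("lake.bronze_events")          # raw, append-only

silver = (
    bronze
    .dropDuplicates(["event_id"])                        # idempotent reloads
    .filter(F.col("event_time").isNotNull())             # basic quality gate
    .withColumn("event_date", F.to_date("event_time"))   # partition column
)

(silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("lake.silver_events"))
```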
Posted 17 hours ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Data Scientist at KPMG in India, you will work closely with business stakeholders and cross-functional subject matter experts to develop a deep understanding of the business context and key questions. Your main responsibilities will include creating Proofs of Concept (POCs) and Minimum Viable Products (MVPs), guiding them through production deployment, and operationalizing projects. You will help shape the machine learning strategy for digital programs and projects.

Your role will involve making solution recommendations that balance speed to market with analytical soundness. You will explore design options to assess efficiency and impact, develop approaches to improve robustness and rigor, and formulate model-based solutions by combining machine learning algorithms with other techniques. Using a variety of commercial and open-source tools such as Python, R, and TensorFlow, you will develop analytical and modeling solutions, create algorithms that extract information from large datasets, deploy those algorithms to production for actionable insights, and compare results from different methodologies to recommend the best techniques.

Furthermore, you will work across multiple pillars of artificial intelligence, including cognitive engineering, conversational bots, and data science, ensuring that the solutions you develop exhibit high levels of performance, security, scalability, maintainability, and reliability upon deployment. In addition to your technical responsibilities, you will lead discussions, provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts, and facilitate the sharing of new ideas, learnings, and best practices across geographies.

To be successful in this role, you need at minimum a Bachelor of Science or Bachelor of Engineering degree and 2-4 years of work experience as a Data Scientist, with a combination of business focus, strong analytical and problem-solving skills, programming knowledge, and proficiency in statistical concepts and machine learning algorithms. Key technical skills include Python, SQL, Docker, and versioning tools, as well as experience with Microsoft Azure or AWS data management tools. Familiarity with Agile principles, descriptive statistics, predictive modeling, decision trees, optimization techniques, and deep learning methodologies is also essential.

You must also be able to lead, manage, and deliver customer business results through data scientists or professional services teams, and communicate clearly in writing and speech, sharing ideas and explaining data analysis assumptions and results. Skills such as agent frameworks, RAG frameworks, and knowledge of AI on cloud services are a must, while familiarity with AI algorithms, deep learning, computer vision, and responsible AI frameworks is a bonus. Overall, as a Data Scientist at KPMG in India, you will develop innovative solutions, drive business outcomes through data-driven insights, and contribute to the continued growth and success of the organization.
Posted 17 hours ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a Hadoop Developer with a Bachelor's degree and 3-6 years of experience, based in Chennai or Bangalore. Your primary responsibilities include working with Big Data technologies and writing SQL queries, with proficiency in tools such as Hadoop, Hive, PySpark, Scala, Azure, and C. Experience with Control-M/Airflow and DevOps is advantageous, and you should have a good understanding of the SDLC, Scrum, and Agile methodologies. Knowledge of the banking and money-laundering (AML) domains is desirable.

Your role involves designing, developing, and implementing Big Data and data lake solutions, writing test cases, validating results, and automating manual tasks. You must possess strong analytical skills, excellent programming abilities, and a proven track record of delivering successful Big Data solutions. Effective communication with stakeholders and coordination with multiple teams will be crucial, and your focus on recognizing problems, providing value-added solutions, and driving business outcomes will be key to your success.
Posted 17 hours ago
4.0 - 8.0 years
0 Lacs
surat, gujarat
On-site
We are seeking a Data Engineer to play a crucial role in designing, constructing, and maintaining scalable data systems and infrastructure. Your primary responsibilities will include collaborating with various teams to gather requirements, developing data pipelines, and establishing best practices for data governance, security, and analytics. This position is an opportunity to shape the core of our data environment and directly influence how we harness data for business insights and innovation.

You will architect and implement data solutions by designing and building scalable data platforms using cloud services such as AWS, Azure, or GCP, along with on-premises technologies, and you will define best practices for data storage, ingestion, and processing in both batch and streaming form. Your expertise will be instrumental in creating and managing robust ETL/ELT workflows that handle a variety of data types while optimizing data pipelines for reliability, scalability, and performance.

You will also define and enforce data governance policies, ensure compliance with relevant data privacy regulations, and implement metadata management and cataloging solutions. Automating the detection of new datasets, schema changes, and lineage updates will be a key aspect of the role, and you will establish automated checks and alerts for data quality, completeness, and consistency, troubleshooting and resolving data-related issues as needed. Your collaboration skills will be put to the test as you work closely with cross-functional stakeholders to translate requirements into scalable technical solutions.

Qualifications include a Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience) and at least 4 years of hands-on experience in data engineering or related fields. Essential technical skills include proficiency with cloud platforms, distributed data processing tools, SQL, programming languages, real-time data streaming solutions, data modeling, and modern data architecture patterns; experience with data governance, security, and data quality frameworks is highly desirable. Soft skills such as excellent communication, stakeholder management, and the ability to explain complex technical concepts to diverse audiences are crucial, and certifications in cloud platforms, infrastructure-as-code, orchestration tools, and containerization for data engineering pipelines are considered advantageous.

If you are passionate about leveraging data for business insights and innovation and possess the technical skills and collaborative mindset this role requires, we encourage you to apply.
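The posting mentions automated checks and alerts for data quality, completeness, and consistency; the sketch below shows a plain PySpark version of that idea, with the checks, thresholds, and input path as illustrative assumptions.

```python
# Sketch of simple automated data-quality checks of the kind described:
# row counts, null checks, and a failure signal for alerting.
# The checks and the input path are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://bucket/curated/orders/")   # hypothetical

total = df.count()
checks = {
    "non_empty": total > 0,
    "order_id_not_null": df.filter(F.col("order_id").isNull()).count() == 0,
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # In production this would page or post to a channel; here we just raise.
    raise RuntimeError(f"Data quality checks failed: {failed}")
print(f"All {len(checks)} checks passed on {total} rows")
```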
Posted 17 hours ago
7.0 - 11.0 years
0 Lacs
ahmedabad, gujarat
On-site
The Lead Data Engineer (Databricks) position is an exciting opportunity for candidates with 7 to 10 years of data engineering experience to join our team in Pune or Ahmedabad. As a Lead Data Engineer, you will play a crucial role in enhancing our data engineering capabilities while working with cutting-edge technologies such as Databricks and Generative AI.

Your responsibilities will include leading the design, development, and optimization of data solutions using Databricks, ensuring scalability, efficiency, and security. You will collaborate with cross-functional teams to gather and analyze data requirements and translate them into robust data architectures and solutions, and you will develop and maintain ETL pipelines, integrating with Azure Data Factory as necessary. You will also implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation, while following data quality, governance, and security practices to maintain the integrity and reliability of data solutions. As a technical leader, you will mentor junior engineers, creating an environment of learning and growth within the team.

To qualify, you should have a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven expertise building and optimizing data solutions using Databricks and integrating with Azure Data Factory or AWS Glue, along with proficiency in SQL and programming languages such as Python or Scala, is essential. A strong understanding of data modeling, ETL processes, data warehouse/data lakehouse concepts, cloud platforms (especially Azure), and containerization technologies is required.

Preferred skills include experience with Generative AI technologies, familiarity with the AWS or GCP cloud platforms, and knowledge of data governance frameworks and tools. Excellent analytical, problem-solving, and communication skills, along with demonstrated leadership ability and experience mentoring junior team members, will be advantageous. You are expected to stay current with the latest trends in data engineering, Databricks, Generative AI, and Azure Data Factory to continuously enhance team capabilities.
Posted 17 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect, and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, including the Philippines, India, and the United States. It started with one ridiculously good idea: to create a different breed of Business Processing Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world.

What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. We also actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.

Job Description Summary: We are seeking a Data Scientist with deep expertise in modern AI/ML technologies to join our innovative team. This role combines cutting-edge research in machine learning, deep learning, and generative AI with practical full-stack cloud development skills. You will architect and implement end-to-end AI solutions, from data engineering pipelines to production-ready applications leveraging the latest in agentic AI and large language models.

Key Responsibilities

AI/ML Development & Research
- Design, develop, and deploy advanced machine learning and deep learning models for complex business problems
- Implement and optimize Large Language Models (LLMs) and Generative AI solutions
- Build agentic AI systems with autonomous decision-making capabilities
- Conduct research on emerging AI technologies and their practical applications
- Perform model evaluation, validation, and continuous improvement

Cloud Infrastructure & Full-Stack Development
- Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP
- Develop full-stack applications integrating AI models with modern web technologies
- Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
- Implement CI/CD pipelines for ML model deployment and monitoring
- Design and optimize cloud infrastructure for high-performance computing workloads

Data Engineering & Database Management
- Design and implement data pipelines for large-scale data processing
- Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
- Optimize database performance for ML workloads and real-time applications
- Implement data governance and quality assurance frameworks
- Handle streaming data processing and real-time analytics

Leadership & Collaboration
- Mentor junior data scientists and guide technical decision-making
- Collaborate with cross-functional teams including product, engineering, and business stakeholders
- Present findings and recommendations to technical and non-technical audiences
- Lead proof-of-concept projects and innovation initiatives

Required Qualifications

Education & Experience
- Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or a related field
- 5+ years of hands-on experience in data science and machine learning
- 3+ years of experience with deep learning frameworks and neural networks
- 2+ years of experience with cloud platforms and full-stack development

Technical Skills - Core AI/ML
- Machine learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms
- Deep learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers
- Large language models: GPT, BERT, T5, fine-tuning, prompt engineering
- Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation
- Agentic AI: multi-agent systems, reinforcement learning, autonomous agents

Technical Skills - Development & Infrastructure
- Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
- Cloud platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI
- Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)
- Full-stack development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes
- MLOps: MLflow, Kubeflow, model versioning, A/B testing frameworks
- Big data: Spark, Hadoop, Kafka, streaming data processing

Preferred Qualifications
- Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma)
- Knowledge of LangChain, LlamaIndex, or similar LLM frameworks
- Experience with model compression and edge deployment
- Familiarity with distributed computing and parallel processing
- Experience with computer vision and NLP applications
- Knowledge of federated learning and privacy-preserving ML
- Experience with quantum machine learning
- Expertise in MLOps and production ML system design

Key Competencies

Technical Excellence
- Strong mathematical foundation in statistics, linear algebra, and optimization
- Ability to implement algorithms from research papers
- Experience with model interpretability and explainable AI
- Knowledge of ethical AI and bias detection/mitigation

Problem-Solving & Innovation
- Strong analytical and critical thinking skills
- Ability to translate business requirements into technical solutions
- Creative approach to solving complex, ambiguous problems
- Experience with rapid prototyping and experimentation

Communication & Leadership
- Excellent written and verbal communication skills
- Ability to explain complex technical concepts to diverse audiences
- Strong project management and organizational skills
- Experience mentoring and leading technical teams

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business.
TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ . TaskUs is proud to be an equal opportunity workplace and is an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' People First culture thrives on it for the benefit of our employees, our clients, our services, and our community. Req Id: R_2507_10295_1 Posted At: Thu Jul 31 2025 00:00:00 GMT+0000 (Coordinated Universal Time)
Posted 18 hours ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for understanding the software requirements and developing them into working source code within the specified timelines. Collaboration with team members to complete deliverables will be a key aspect of your role. You will also be accountable for recording and managing production incidents/errors and engaging or notifying the team as necessary. Your participation in the resolution of production incidents with stakeholders and management will be crucial. Additionally, you will manage application/infrastructure production configuration, processes, and releases. You will be mentored on best practices followed in the software industry and will contribute to all stages of the software development lifecycle: envisioning system features and functionality, ensuring application designs align with business goals, and writing well-designed, testable code. You will be expected to develop and test software to high quality standards, contribute to production support, and resolve issues in a timely manner. Understanding Agile practices and setting priorities on work products based on agreed iteration goals are also part of your responsibilities. Effective collaboration with team members to share best practices and flexibility to work and support Paris hours are required.

As a Big Data Specialist Engineer with a financial-domain background, you must have hands-on experience in Hadoop ecosystem application development and in Spark and Scala development. The role requires:
A thorough understanding of Hadoop and its ecosystem, modern and traditional databases, SQL, and microservices
Excellent coding skills in Scala, including advanced Scala, and hands-on experience with Spark (see the sketch below)
Experience with Apache NiFi and Kafka
Proficiency in the Linux environment and its tools, plus Git, Jenkins, and Ansible
Experience in the financial and banking domain, specifically in the Credit Risk chain, is a must. You should possess excellent communication skills, work effectively both independently and in a team, and have good problem-solving skills and attention to detail.

Joining us at Société Générale offers the opportunity to be part of a team that believes in the transformative power of individuals. You will have the chance to impact the future by creating, daring, innovating, and taking action. Whether you're here for a short period or planning a long-term career, together we can make a positive difference. Employee engagement in solidarity actions, supporting ESG principles, and fostering diversity and inclusion are integral parts of our organizational culture. If you are looking to grow in a stimulating environment, contribute positively to society, and enhance your expertise, you will find a welcoming home with us.
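As an illustration of the kind of Hadoop-ecosystem batch job this role describes, here is a minimal sketch (the role itself emphasizes Scala; Python is used here only to keep the examples on this page in one language). The Hive database, table, column names, and date are hypothetical, and the sketch assumes a cluster where Hive support is configured:

```python
# Minimal PySpark sketch of a batch aggregation over a Hive table.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("credit-risk-aggregation")
         .enableHiveSupport()          # assumes Hive is configured on the cluster
         .getOrCreate())

exposures = spark.table("risk_db.counterparty_exposures")   # hypothetical Hive table

daily_exposure = (exposures
                  .filter(F.col("as_of_date") == "2025-07-31")
                  .groupBy("counterparty_id")
                  .agg(F.sum("exposure_amount").alias("total_exposure")))

daily_exposure.write.mode("overwrite").saveAsTable("risk_db.daily_exposure_summary")
```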
Posted 18 hours ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Principal Data Engineer (Associate Director) is a permanent position based in Bangalore within the ISS department at Fidelity. As part of the ISS Data Platform Team, you will be responsible for designing, developing, and maintaining scalable data pipelines and architectures to facilitate data ingestion, integration, and analytics. You will lead a team of senior and junior developers, providing mentorship and guidance, while collaborating with enterprise architects, business analysts, and stakeholders to understand data requirements and drive technical innovation within the department.

Your key responsibilities will include taking ownership of technical delivery, leading a subsection of the wider data platform, and ensuring that code reusability, quality, and developer productivity are maximized. You will be expected to challenge the status quo by implementing the latest data engineering practices and techniques. Additionally, you will be required to leverage cloud-based data platforms such as Snowflake and Databricks, have expertise in the AWS ecosystem (particularly Lambda, EMR, MSK, Glue, and S3), and be proficient in Python, SQL, and CI/CD pipelines.

The ideal candidate will possess advanced technical skills in designing event-based or streaming data architectures using tools like Kafka (a minimal consumer sketch follows below), implementing data access controls for regulatory compliance, and using both RDBMS and NoSQL offerings. Experience with CDC ingestion, orchestration tools like Airflow, and containerization technologies is desirable. Strong soft skills in problem-solving, strategic communication, and project management are crucial for this role.

At Fidelity, we offer a comprehensive benefits package, prioritize your well-being, support your development, and promote a flexible work environment. We are committed to ensuring that you feel motivated by your work, happy to be a part of our team, and have the opportunity to build your future with us. To learn more about our work culture, commitment to dynamic working, and potential career growth opportunities, visit careers.fidelityinternational.com.
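A minimal sketch of the event-based ingestion pattern mentioned above, assuming the kafka-python client (`pip install kafka-python`); the broker address, topic, consumer group, and event fields are hypothetical:

```python
# Minimal sketch of a Kafka event consumer for an ingestion pipeline.
# Broker, topic, group, and field names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "trades.raw",                                  # hypothetical topic
    bootstrap_servers="kafka-broker:9092",
    group_id="data-platform-ingest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would validate the event and land it in S3 or a
    # downstream store; printing stands in for that sink here.
    if "trade_id" in event:
        print(event["trade_id"], event.get("notional"))
```

In an AWS setup of the kind the posting describes, the consumer loop would typically be replaced by MSK plus a managed sink, but the offset, group, and deserialization concerns shown here carry over.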
Posted 19 hours ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv. We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title
Software Development Engineering - Professional I

What does a successful Software Development Engineering - Professional I do at Fiserv: As a senior developer, you will work as part of a team to design, develop, test, and maintain Java-based applications that leverage Oracle Database. The candidate should possess a strong foundation in Java programming and Oracle technologies.

What Will You Do
Write clean, efficient, and maintainable Java code.
Assist in designing and implementing software solutions.
Participate in code reviews and debugging.
Troubleshoot and resolve software defects and issues.
Collaborate with cross-functional teams.

What Will You Need To Know
Bachelor's degree in computer science, engineering, or a related field.
6 to 10 years of experience as a tech lead or senior developer.
Knowledge of core Java/J2EE concepts and programming principles.
Knowledge of databases and SQL.
Familiarity with Java development tools and IDEs.
Familiarity with tools and frameworks such as Git, JIRA, Jenkins, and Kafka.
Strong problem-solving and analytical skills.
Full-stack developer experience in a digital tech stack, e.g., HTML, CSS, JavaScript, Core Java, Spring Boot, Scala, React, Angular, PostgreSQL, CockroachDB, etc.
Strong experience in developing APIs and microservices on cloud (see the sketch below).
Experience with tools and frameworks such as Jenkins, Maven, GitLab, JIRA, SonarQube, TDD/BDD, Docker & Kubernetes, and Kafka.

What Would Be Great To Have
Ability to speak, write, and read English (advanced intermediate level).
Agile development experience.

We welcome and encourage diversity in our workforce. Fiserv is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Explore the possibilities of a career with Fiserv and Find your Forward with us!

Thank You For Considering Employment With Fiserv. Please:
Apply using your legal name
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information.
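The role centers on Java/Spring Boot, but the cloud API/microservice pattern it asks for can be sketched compactly. This FastAPI version is illustrative only, kept in Python for consistency with the other examples on this page; the endpoint, fields, and in-memory store are hypothetical stand-ins for an Oracle-backed service:

```python
# Minimal sketch of a REST microservice exposing a status lookup endpoint.
# Everything below (service name, route, fields, fake store) is hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="payments-status-service")

class PaymentStatus(BaseModel):
    payment_id: str
    status: str   # e.g. "PENDING", "SETTLED", "FAILED"

# Stand-in for the database lookup a real service would perform
_FAKE_DB = {"pay-001": "SETTLED", "pay-002": "PENDING"}

@app.get("/payments/{payment_id}/status", response_model=PaymentStatus)
def get_payment_status(payment_id: str) -> PaymentStatus:
    status = _FAKE_DB.get(payment_id)
    if status is None:
        raise HTTPException(status_code=404, detail="payment not found")
    return PaymentStatus(payment_id=payment_id, status=status)
```

Run locally with, e.g., `uvicorn service:app --reload` (module name assumed); the same resource-per-URL design translates directly to a Spring Boot controller.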
Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 22 hours ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Fusemachines
Fusemachines is a 10+ year old AI company, dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, Ph.D., an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.

About The Role
This is a remote, contract position responsible for designing, building, and maintaining the infrastructure required for data integration, storage, processing, and analytics (BI, visualization, and advanced analytics). We are looking for a skilled Senior Data Engineer with a strong background in Python, SQL, PySpark, Azure, Databricks, Synapse, Azure Data Lake, DevOps, and cloud-based large-scale data applications, with a passion for data quality, performance, and cost optimization. The ideal candidate will develop in an Agile environment, contributing to the architecture, design, and implementation of data products in the aviation industry, including migration from Synapse to Azure Data Lake. This role involves hands-on coding, mentoring junior staff, and collaboration with multi-disciplined teams to achieve project objectives.

Qualification & Experience
Must have a full-time Bachelor's degree in Computer Science or similar
At least 5 years of experience as a data engineer with strong expertise in Databricks, Azure, DevOps, or other hyperscalers
5+ years of experience with Azure DevOps, GitHub
Proven experience delivering large-scale projects and products for Data and Analytics as a data engineer, including migrations
The following certifications:
Databricks Certified Associate Developer for Apache Spark
Databricks Certified Data Engineer Associate
Microsoft Certified: Azure Fundamentals
Microsoft Certified: Azure Data Engineer Associate
Microsoft Exam: Designing and Implementing Microsoft DevOps Solutions (nice to have)

Required Skills/Competencies
Strong programming skills in one or more languages such as Python (must have) and Scala, and proficiency in writing efficient and optimized code for data integration, migration, storage, processing, and manipulation
Strong understanding of and experience with SQL and writing advanced SQL queries
Thorough understanding of big data principles, techniques, and best practices
Strong experience with scalable and distributed data processing technologies such as Spark/PySpark (must have: experience with Azure Databricks), DBT, and Kafka, to be able to handle large volumes of data
Solid Databricks development experience with significant Python, PySpark, Spark SQL, Pandas, and NumPy work in an Azure environment
Strong experience in designing and implementing efficient ELT/ETL processes in Azure and Databricks, using open-source solutions and developing custom integration solutions as needed
Skilled in data integration from different sources such as APIs, databases, flat files, and event streaming
Expertise in data cleansing, transformation, and validation
Proficiency with relational databases (Oracle, SQL Server, MySQL, Postgres, or similar) and NoSQL databases (MongoDB or Azure Table Storage)
Good understanding of data modeling and database design principles.
Ability to design and implement efficient database schemas that meet the requirements of the data architecture to support data solutions
Strong experience in designing and implementing data warehousing, data lake, and data lakehouse solutions in Azure and Databricks
Good experience with Delta Lake, Unity Catalog, Delta Sharing, and Delta Live Tables (DLT)
Strong understanding of the software development lifecycle (SDLC), especially Agile methodologies
Strong knowledge of SDLC tools and technologies in Azure DevOps and GitHub, including project management software (Jira, Azure Boards, or similar), source code management (GitHub, Azure Repos, or similar), CI/CD systems (GitHub Actions, Azure Pipelines, Jenkins, or similar), and binary repository managers (Azure Artifacts or similar)
Strong understanding of DevOps principles, including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC: Terraform, ARM, including hands-on experience), configuration management, automated testing, performance tuning, and cost management and optimization
Strong knowledge of cloud computing, specifically Microsoft Azure services related to data and analytics, such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake, Azure Stream Analytics, SQL Server, Azure Blob Storage, Azure Data Lake Storage, Azure SQL Database, etc.
Experience in orchestration using technologies like Databricks Workflows and Apache Airflow
Strong knowledge of data structures and algorithms and good software engineering practices
Proven experience migrating from Azure Synapse to Azure Data Lake or other technologies
Strong analytical skills to identify and address technical issues, performance bottlenecks, and system failures
Proficiency in debugging and troubleshooting issues in complex data and analytics environments and pipelines
Good understanding of data quality and governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent
Experience with BI solutions, including Power BI, is a plus
Strong written and verbal communication skills to collaborate and articulate complex situations concisely with cross-functional teams, including business users, data architects, DevOps engineers, data analysts, data scientists, developers, and operations teams
Ability to document processes, procedures, and deployment configurations
Understanding of security practices, including network security groups, Azure Active Directory, encryption, and compliance standards
Ability to implement security controls and best practices within data and analytics solutions, including proficient knowledge of and working experience with various cloud security vulnerabilities and ways to mitigate them.
Self-motivated, works well in a team, and experienced in mentoring and coaching different members of the team
A willingness to stay updated with the latest services, data engineering trends, and best practices in the field
Comfortable with picking up new technologies independently and working in a rapidly changing environment with ambiguous requirements
Cares about architecture, observability, testing, and building reliable infrastructure and data pipelines

Responsibilities
Architect, design, develop, test, and maintain high-performance, large-scale, complex data architectures which support data integration (batch and real-time, ETL and ELT patterns from heterogeneous data systems: APIs and platforms), storage (data lakes, warehouses, data lakehouses, etc.), processing, orchestration, and infrastructure, ensuring the scalability, reliability, and performance of data systems, with a focus on Databricks and Azure
Contribute to detailed design, architectural discussions, and customer requirements sessions
Actively participate in the design, development, and testing of big data products
Construct and fine-tune Apache Spark jobs and clusters within the Databricks platform (see the ELT sketch below)
Migrate out of Azure Synapse to Azure Data Lake or other technologies
Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive)
Design and implement data models and schemas that support efficient data processing and analytics
Design and develop clear, maintainable code with automated testing using Pytest, unittest, integration tests, performance tests, regression tests, etc.
Collaborate with cross-functional teams across Product, Engineering, Data Science, and Analytics to understand data requirements and develop data solutions, including reusable components, that meet product deliverables
Evaluate and implement new technologies and tools to improve data integration, data processing, storage, and analysis
Evaluate, design, implement, and maintain data governance solutions: cataloging, lineage, data quality, and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns
Continuously monitor and fine-tune workloads and clusters to achieve optimal performance
Provide guidance and mentorship to junior team members, sharing knowledge and best practices
Maintain clear and comprehensive documentation of the solutions, configurations, and best practices implemented
Promote and enforce best practices in data engineering, data governance, and data quality
Ensure data quality and accuracy
Design, implement, and maintain data security and privacy measures
Be an active member of an Agile team, participating in all ceremonies and continuous improvement activities, able to work independently as well as collaboratively

Fusemachines is an Equal Opportunities Employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws. Powered by JazzHR OlZLzNKEXE
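A minimal sketch of the kind of Databricks ELT step described above, assuming a Databricks runtime where the Delta format is available; the ADLS paths, schema, and column names are hypothetical:

```python
# Minimal PySpark ELT sketch: read raw CSV, apply basic quality gates,
# write curated Delta. All paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("flights-elt").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://landing@lakeaccount.dfs.core.windows.net/flights/"))  # hypothetical ADLS path

cleaned = (raw
           .dropDuplicates(["flight_id"])
           .withColumn("departure_ts", F.to_timestamp("departure_ts"))
           .withColumn("flight_date", F.to_date("departure_ts"))
           .filter(F.col("departure_ts").isNotNull()))   # basic quality gate

(cleaned.write
 .format("delta")                 # Delta assumed available on the cluster
 .mode("overwrite")
 .partitionBy("flight_date")
 .save("abfss://curated@lakeaccount.dfs.core.windows.net/flights/"))
```

Partitioning by the derived date column keeps downstream reads pruned, which is one of the cost and performance levers the role calls out.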
Posted 23 hours ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description: Data Engineer – Neo4j
The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated with a passion for problem solving and continuous learning.

Experience: 3 – 6 Years

Role and responsibilities
Build graph database solutions leveraging large-scale datasets to solve various business use cases.
Design and build graph data models to support a variety of use cases, including knowledge graphs.
Design and build graph database load processes to efficiently populate the graph database.
Strong organizational skills, with the ability to work autonomously as well as in a team-based environment.
Data pipeline framework development.

Technical skills requirements
The candidate must demonstrate proficiency in:
Graph data modeling, graph schema development, and graph data design, including graph query languages (Cypher, Gremlin, SPARQL) and exposure to various graph data modeling techniques.
Hands-on experience with the Neo4j database and the Cypher query language; the candidate should be able to write efficient and accurate Cypher queries to solve business problems (see the sketch after this list).
Basic network science concepts and graph algorithms.
A strong understanding of what graph data looks like and how it can be created from relational data; the candidate should be able to import data into Neo4j effectively and manage Neo4j database instances.
Fluency in complex SQL and experience with RDBMSs.
Project experience in Python, Spark, PySpark, Scala, NiFi, Hive, and NoSQL databases.
Deep understanding of representing relational models using a graph database for large clusters of nodes.
Relevant experience in general database design, with emphasis on graph storage models.
Experience working with Databricks would be an added advantage.
Solid grounding in Agile methodologies.
Experience with git and other source control systems.

Nice-to-have skills
Neo4j Certified Developer certification.
Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations.
Experience managing client delivery teams, ideally coming from a Data Engineering / Data Science environment.

Qualifications
B.Tech./M.Tech./MS or BCA/MCA degree from a reputed university
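A minimal sketch of loading relational-style rows into Neo4j and querying them with Cypher, assuming the official neo4j Python driver (`pip install neo4j`); the connection details, labels, and properties are hypothetical:

```python
# Minimal sketch: turn relational-style rows into a graph, then query it.
# Connection details, labels, and properties are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

rows = [  # stand-in for rows pulled from an RDBMS
    {"emp": "Asha", "dept": "Risk"},
    {"emp": "Ravi", "dept": "Risk"},
]

with driver.session() as session:
    for r in rows:
        # MERGE keeps the load idempotent: nodes and edges are created only once
        session.run(
            "MERGE (e:Employee {name: $emp}) "
            "MERGE (d:Department {name: $dept}) "
            "MERGE (e)-[:WORKS_IN]->(d)",
            emp=r["emp"], dept=r["dept"],
        )
    result = session.run(
        "MATCH (e:Employee)-[:WORKS_IN]->(d:Department {name: $dept}) "
        "RETURN e.name AS name", dept="Risk")
    print([record["name"] for record in result])

driver.close()
```

The pattern shown, a foreign key becoming an explicit relationship, is the essence of deriving a graph model from relational data; at scale one would batch the MERGEs or use LOAD CSV rather than row-at-a-time calls.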
Posted 1 day ago
175.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.

Responsibilities include but are not limited to:
Owns all technical aspects of software development for assigned applications.
Performs hands-on architecture, design, and development of systems.
Functions as a member of an agile team and helps drive consistent development practices with respect to tools, common components, and documentation.
Typically spends 80% of the time writing code and testing, and the remainder of the time collaborating with stakeholders through ongoing product/platform releases.
Develops a deep understanding of tie-ins with other Amex systems and platforms within the supported domains.
Writes code and unit tests, works on API specs and automation, and conducts code reviews and testing.
Performs ongoing refactoring of code, utilizes visualization and other techniques to fast-track concepts, and delivers continuous improvement. Identifies opportunities to adopt innovative technologies.
Provides continuous support for ongoing application availability.
Works closely with product owners on blueprints and annual planning of feature sets that impact multiple platforms and products.
Works with product owners to prioritize features for ongoing sprints and manages a list of technical requirements based on industry trends, new technologies, known defects, and issues.

Qualifications:
Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent work experience
3+ years of software development experience
Demonstrated experience with Agile or other rapid application development methods
Demonstrated experience with object-oriented design and coding
Demonstrated experience with these core technical skills (mandatory):
Core Java, Spring Framework, Java EE
Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
Spark
Relational databases (Postgres / MySQL / DB2, etc.)
Cloud development (microservices)
Parallel and distributed (multi-tiered) systems
Application design, software development, and automated testing
Demonstrated experience with these additional technical skills (nice to have):
Unix / shell scripting
Python / Scala
Message queuing, stream processing (Kafka); see the sketch below
Elasticsearch
Web services, open API development, and REST concepts
Experience with implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit
Experience with the Adobe suite of personalization products such as Journey Optimizer, Decisioning Engine, and Workfront

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
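A minimal sketch of the Kafka stream-processing skill listed above, using Spark Structured Streaming; the broker, topic, and one-minute window are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath (e.g. via `--packages`):

```python
# Minimal sketch of windowed stream processing over a Kafka topic.
# Broker, topic, and window size are hypothetical; requires the
# spark-sql-kafka package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("card-events-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "card.auth.events")      # hypothetical topic
          .load())

# Kafka delivers bytes; cast the payload and count events per 1-minute window
counts = (events
          .select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp"))
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("complete")   # full recount of windows on each trigger
         .format("console")
         .start())
query.awaitTermination()
```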
Posted 1 day ago