4.0 - 5.0 years
6 - 7 Lacs
Hyderabad
Work from Office
Description & Requirements

Embedded within the Infor Data Services team, the Java Developer is responsible for designing and implementing high-quality, reusable Java components and services. The role involves using Spring Boot to implement microservice architectures and integrating them with various databases and data storage solutions, ensuring the performance and scalability of the software in line with Infor's data management strategy.

Essential Duties: Develop reusable and maintainable Java components and services. Implement microservice architecture using Spring Boot. Design REST APIs with a focus on industry standards. Utilize Spark in Java for data processing tasks. Integrate code with databases, both relational (SQL) and NoSQL. Conduct unit testing to ensure functionality meets design specifications. Apply object-oriented programming (OOP) principles effectively. Collaborate with cross-functional teams to translate technical requirements into effective code.

Basic Qualifications: 4-5 years of experience in Java development. Strong proficiency in core and advanced Java, including the latest features. Experience with the Spring Boot and Spark libraries in Java. Knowledge of databases, both relational and NoSQL. Knowledge of multiple design patterns, Kafka, Git, Docker, and Linux. Strong communication, problem-solving, and teamwork skills. Bachelor's degree in Computer Science or a related field.

Preferred Qualifications: Experience with Spark using Spring Boot. Familiarity with AWS services, and Agile tools like Jira and Confluence.
Posted 6 hours ago
3.0 - 6.0 years
6 - 10 Lacs
Pune
Work from Office
About Atos

Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. 10 billion. The European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital environment for its clients. Atos is an SE (Societas Europaea) listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers, employees, and members of society at large to live, work and develop sustainably in a safe and secure information space.

Role: Azure Databricks Developer
Location: Mumbai / Chennai

Responsibilities: Develop, enhance, and maintain scalable data pipelines and ETL processes in Azure Databricks. Migrate Spark-Scala jobs to the Azure Databricks environment. Provide test support for Spark jobs in Databricks. Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks. Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.

Profile Requirements: (Required) Excellent English communication skills (spoken/written). (Required) Experience in developing and maintaining data pipelines using Azure Databricks and Spark. (Required) Proficiency in programming languages such as Scala. (Required) Good problem-solving skills. (Required) Work diligence and responsibility. (Optional) Educational background in IT or a related field.

Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance, with integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
Posted 6 hours ago
15.0 - 22.0 years
50 - 60 Lacs
Hyderabad
Work from Office
We are seeking an experienced Data Solution Architect to lead the design and implementation of scalable, secure, and high-performing data solutions across cloud and hybrid environments. The ideal candidate will bring deep expertise in data engineering, APIs, Python, Spark/PySpark, and enterprise cloud platforms such as AWS and Azure. This is a strategic, client-facing role that involves working closely with stakeholders, engineering teams, and business leaders to architect and deliver robust data platforms.

Key Responsibilities: Architect end-to-end data solutions across cloud (AWS/Azure) and on-premises environments. Develop and integrate RESTful APIs for data ingestion, transformation, and distribution. Define data architecture standards, best practices, and governance frameworks. Work with DevOps and cloud teams to deploy solutions using CI/CD and infrastructure-as-code. Guide and mentor data engineering teams in solution implementation and performance optimization. Ensure high availability, scalability, and data security compliance across platforms. Collaborate with product owners and stakeholders to translate business needs into technical specifications. Conduct architecture reviews, risk assessments, and solution validation.

Requirements

Required Skills & Experience: 15 to 22 years of total experience in IT, with at least 5 years in data architecture roles. Strong experience in data processing frameworks and building ETL solutions. Proven expertise in designing and deploying solutions on AWS and Azure cloud platforms. Hands-on experience with data integration, real-time streaming, and API-based data access. Proficiency in data modeling (structured, semi-structured, and unstructured data). Deep understanding of data lakes, data warehouses, and modern data mesh/architecture patterns. Experience with tools such as Airflow, Glue, Databricks, Synapse, Redshift, or similar. Knowledge of security, compliance, and governance practices in large-scale data platforms. Strong communication, leadership, and client-facing skills.

Benefits: Standard company benefits.
Posted 6 hours ago
2.0 - 7.0 years
4 - 9 Lacs
Chennai
Work from Office
Do you like developing technical training, including gamified learning solutions, video creation, instructional design, and graphic design, to create high-impact training curricula? Amazon is looking for innovative learning professionals to support the development of engaging digital training for our front-line data associates. As a Learning Experience Designer, you will be an eLearning developer creating digital assets using standard instructional software tools to develop training that will inspire our learner population.

Roles and responsibilities will include, but are not limited to: Develop learning solutions using eLearning development tools, including using storyboards and design direction to create digital assets such as videos, graphics, and modules. Develop game-based learning, podcasts, and interactive leaderboards to improve learning retention for AGI DS employees. Rapidly develop digital assets using the right methodologies to support course content in a frequently changing environment. Strong video editing and development skills. Partner with other Learning Experience Designers to produce high-quality eLearning content. Support curriculum maintenance reviews for developed training and update learning assets according to maintenance cycles. Design and develop templates, apply the ADDIE approach to module design, and track scores and absorption of the learning content. Perform deep-dive analysis on deviations, problems, root causes, and solutions. Experience with SCORM requirements. End-to-end execution of UAT, LMS, and localization of content, along with coordination with cross-functional teams such as Conventions, Operations, and WFM/Tech. Understand the business requirement and execute tasks per timelines, or realign on expected deliverables and timing using multiple strategic solutions to impart learning.

A day in the life: As a Learning Experience Designer, you will be an eLearning developer creating digital assets using standard instructional software tools to develop training that will inspire our learner population.

About the team: The AGI DS organization is engaged in data processing to support voice recognition for Alexa, the cloud-based service that powers devices like Amazon Echo, Echo Show, Echo Plus, Echo Spot, Echo Dot, and more. The Alexa service is always getting smarter, both in features and in natural language understanding and accuracy. Because Alexa's brains are in the AWS cloud, she continually learns and adds more functionality, every hour, every day. We also are building the future with Alexa LLM and generative AI. Come build the future with us.

2+ years of design experience. Have an available online portfolio. Experience working with a variety of design tools such as Photoshop, Illustrator, and InDesign. Experience in prototyping. Knowledge of user-centered design methodologies, usability principles, web-based information architecture and design. Experience working in a collaborative team and working directly with developers on implementation of designs.
Posted 6 hours ago
6.0 - 9.0 years
12 - 13 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs: from advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.

Job Description

This role is responsible for driving multiple complex tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding and development of advanced analytics solutions to make and optimize business decisions and processes; integrating new tools to improve descriptive, predictive, and prescriptive analytics; and discovery of new technical challenges that can be solved with existing and emerging Big Data hardware and software solutions. This role contributes to Allstate's structured and unstructured Big Data / Data Science tools, from traditional to emerging analytics technologies and methods, and assists in the selection and development of other team members.

Skills: Primarily Scala & Spark: strong in functional programming and big data processing using Spark. Java: proficient in Java 8+, REST API development, multithreading, and OOP concepts. Good hands-on experience with MongoDB. CaaS: experience with Docker, Kubernetes, and deploying containerized apps. Tools: Git, CI/CD, JSON, SBT/Maven, Agile methodologies.

Key Responsibilities: Uses new areas of Big Data technologies (ingestion, processing, distribution) and researches delivery methods that can solve business problems. Participates in the development of complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions. Supports innovation; regularly provides new ideas to help the people, processes, and technology that interact with the analytic ecosystem. Participates in the development of complex technical solutions using Big Data techniques in data & analytics processes. Influences the team on the effectiveness of Big Data systems to solve business problems. Leverages Big Data best practices and lessons learned to develop technical solutions used for descriptive analytics, ETL, predictive modeling, and prescriptive real-time decision analytics. Partners closely with team members on Big Data solutions for our data science community and analytic users. Partners with Allstate Technology teams on Big Data efforts.

Education: Master's Degree (Preferred). Experience: 6 or more years of experience (Preferred).

Primary Skills: Apache Spark, Big Data, Big Data Engineering, Big Data Systems, Big Data Technologies, CasaXPS, CI/CD, Data Science, Docker (Software), Git, Influencing Others, Java, MongoDB, Multithreading, RESTful APIs, Scala (Programming Language), ScalaTest, Spring Boot
Posted 6 hours ago
3.0 - 6.0 years
4 - 8 Lacs
Hyderabad
Work from Office
We are seeking an Automation Engineer with expertise in object-oriented programming using Python and Scala. The role involves designing automation solutions, managing stakeholders, and delivering efficient data processing pipelines.

Responsibilities: Develop and maintain automation pipelines using Python and Scala. Execute SQL queries for data analysis and validation. Analyze and solve complex problems with scalable solutions. Collaborate with clients to understand requirements and ensure successful delivery. Communicate effectively with stakeholders and cross-functional teams.

Required Skills: Hands-on experience in automation and functional testing. Strong experience in developing automation pipelines using object-oriented programming (Python & Scala) and SQL. Exceptional ability to build coding logic. Excellent analytical and logical problem-solving abilities. Outstanding communication and stakeholder management skills.

Preferred Qualifications: BE/BTech or equivalent technical degree. Proven automation experience in client-facing roles.

Location: Hyderabad (Work from Client Office). Python, PySpark framework, Scala.
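As a rough sketch of the automation and data-validation work described above, the snippet below checks rows before they enter a pipeline. The rules, field names, and sample data are hypothetical, invented for illustration; they are not the client's actual schema or framework.

```python
# A minimal data-validation sketch; rules and field names are hypothetical.
def validate_rows(rows, required=("id", "amount")):
    """Return (valid, errors): rows missing required fields or carrying
    a non-positive amount are flagged rather than silently dropped."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required if row.get(f) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        elif row["amount"] <= 0:
            errors.append((i, "non-positive amount"))
        else:
            valid.append(row)
    return valid, errors

# Illustrative input: one clean row, one bad amount, one missing id.
rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": -5.0},
    {"id": None, "amount": 3.0},
]
good, bad = validate_rows(rows)
print(len(good), len(bad))  # 1 valid row, 2 flagged rows
```

Separating "flag" from "drop" like this keeps the pipeline auditable: the error list can feed a report or a ticket queue instead of losing records silently.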
Posted 6 hours ago
3.0 - 10.0 years
8 - 9 Lacs
Mumbai
Work from Office
Associate Data Analyst / Data Operations, OneTru Data Operations Team

As part of the Data Operations Team, this position is focused on delivering actionable insights to evaluate and control data ingestion processes across multiple sources (cloud and on-prem). This individual will leverage state-of-the-art tools to cultivate the analytical methods needed to consistently refine and improve the efficiency and effectiveness of data onboarding and data operations processes. In this role, you will act as an ETL Platform subject matter expert for the processes you are involved in, gathering knowledge from other SMEs and requirements from key stakeholders.

Responsibilities: Identify, analyze, and troubleshoot possible data flow issues between servers and processing steps. Identify, analyze, and interpret trends or patterns across flows and data sets. Measure, track, and report key performance indicators and quality metrics from large data sets, automated processes, and data processing stages. Find and solve data problems, ensuring timely short-term and long-term preventive solutions. Develop and improve existing processes to ensure data ingestion through the ETL Platform. Work with management and teammates to prioritize business and information needs. Locate and define new opportunities for process improvement or process automation. Deliver excellent customer support through efficient and accurate handling of tickets/requests and general program inquiries. Perform other work-related tasks and responsibilities assigned to you from time to time. Participate in new product and feature deployments and propose technical solutions that meet business needs.

Requirements: Active student in Systems Engineering, Statistics, Mathematics, Industrial Engineering, or a related field. Logical thinking and troubleshooting skills. Clear verbal and written communication skills. B2+ English level. Knowledge of and experience with Microsoft Excel, SQL, and regular/glob expressions. Knowledge of and experience with visualization tools such as Tableau, Power BI, or Google Looker (nice to have). Experience with Unix/Linux, Hadoop, and scripting languages such as Python, Bash, or JavaScript (nice to have).

Aptitude: Results-oriented, with a mindset to improve data, processes, and procedures. Ability to work independently and effectively in a team environment. Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and precision. Ability to learn and apply new technologies promptly. Interpersonal skills (leadership, teamwork, teaching, ability to dialogue, and effective interaction with different profiles across the organization). Creative problem-solving and research skills with the ability to recognize patterns in data.

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in person at an assigned TU office location a minimum of two days a week.

TransUnion Job Title: Analyst, Data Analysis
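The "regular/glob expressions" skill called out in the requirements above can be illustrated with Python's standard library. The file names and patterns below are invented for the example, not actual ingestion data.

```python
import fnmatch
import re

# Hypothetical ingestion file names (illustrative only).
files = ["sales_2024-01.csv", "sales_2024-02.csv", "inventory_2024-01.csv"]

# Glob-style pattern: '*' matches any run of characters in the name.
sales_glob = [f for f in files if fnmatch.fnmatch(f, "sales_*.csv")]

# The equivalent regular expression: fullmatch anchors both ends,
# and the literal dot before the extension is escaped.
sales_re = [f for f in files if re.fullmatch(r"sales_.*\.csv", f)]

print(sales_glob)              # the two sales files
print(sales_re == sales_glob)  # True: both notations express the same filter
```

Globs are terser for simple file selection, while regular expressions handle the cases globs cannot (alternation, anchoring inside a name, capturing parts of it), which is why both appear in the same requirement.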
Posted 6 hours ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
ETL Developer / Java Developer (ETL & API Integration)
Location: Bangalore (hybrid, 2 to 3 days onsite per week)
Company: Hireflex247 India Pvt Ltd
Type: Contract / Full-Time

We are seeking a Java Developer with a strong foundation in ETL design, RESTful API integrations, and data transformation workflows to support a global enterprise client. This role is pivotal in ensuring the smooth flow of data across multiple systems, from Candidate to Co-worker, helping maintain system stability, accuracy, and performance. You'll be part of a dynamic technology team supporting the People Operations and Transformation (POT) initiatives. The scope includes balancing enhancements, change requests (CRs), and bug fixes in an agile environment, ensuring seamless data processing and integration.

Key Responsibilities: Develop and maintain Java-based ETL processes for handling high-volume data transformations. Design, consume, and integrate RESTful APIs efficiently, ensuring error handling and reliability. Ensure data accuracy, consistency, and mapping across systems. Collaborate with cross-functional teams to support changes, enhancements, and critical business flows. Contribute to resolving production issues and maintaining business continuity in data processing pipelines. (Preferred) Assist in integrations involving SAP SuccessFactors, particularly around Recruiting and Hiring flows.

Required Skills: Strong Java development expertise, with experience in ETL-style backend logic. Hands-on experience with REST APIs: building, consuming, and handling errors and integration scenarios. Data transformation and mapping skills across enterprise systems. Ability to work both independently and collaboratively in agile delivery teams. Exposure to or working knowledge of SAP SuccessFactors, especially around Recruit-to-Hire or Candidate-to-Co-worker integrations. Experience working in fast-paced enterprise environments and debugging live production systems.

Top 3 Must-Haves: Strong Java backend development with a focus on ETL logic. Proficiency with RESTful API integration and error handling. Experience in data mapping and transformation across multiple systems.
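The "data mapping and transformation across multiple systems" skill above amounts to translating one system's record shape into another's. The sketch below shows the idea in Python; the Candidate-to-Co-worker field names and mapping are hypothetical, not the client's or SuccessFactors' actual schema.

```python
# Illustrative only: field names and mapping are hypothetical.
FIELD_MAP = {
    "candidateId": "workerId",
    "firstName": "givenName",
    "lastName": "familyName",
}

def transform(candidate: dict) -> dict:
    """Map a source record onto the target schema.

    Fields absent from FIELD_MAP (e.g. a resume URL) are deliberately
    dropped rather than passed through, keeping the target record clean.
    """
    return {dst: candidate[src] for src, dst in FIELD_MAP.items() if src in candidate}

record = {"candidateId": "C-101", "firstName": "Asha", "lastName": "Rao", "resumeUrl": "..."}
print(transform(record))
```

Keeping the mapping in a declarative table rather than hard-coded assignments is what makes change requests cheap: adding or renaming a field touches one line, not the transformation logic.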
Posted 6 hours ago
1.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
IQVIA is developing our next-generation Global Privacy Analytics Engineering platform to support analytics and insights against hundreds of terabytes of health care data, in near real time. We are currently seeking candidates with experience in building and taking to production low-latency, massively parallel processing (MPP) data and analytic systems, ideally on Hadoop, Scala, and Spark.

Responsibilities: Develop and maintain high-quality data processing applications using Apache Spark and related technologies. Design and implement data transformations with a focus on performance and scalability. Collaborate with data engineers and analysts to integrate data sources and services. Write and execute unit tests to ensure code quality and reliability. Participate in code reviews and adhere to best coding practices. Work within an Agile development framework to deliver features and improvements. Troubleshoot and resolve issues in existing data processing applications. Stay up to date with the latest industry trends and technologies. Mentor junior developers and provide technical guidance. Communicate effectively with team members and stakeholders to ensure project success. Manage multiple tasks and meet deadlines in a fast-paced environment. Utilize tools like JIRA, Confluence, and Git/Bitbucket for project management and version control. Experience with testing frameworks like ScalaTest or JUnit.

Qualifications: Bachelor's degree in Computer Science, Software Engineering, or a closely related field.

Technical Requirements: 4+ years of experience with Apache Spark (preferably Spark 3.0 or above). Proficiency in the Scala or Java programming languages. Experience with big data technologies like Hadoop, Kafka, or HDFS. Experience with orchestration tools like Airflow. Experience working with managed cloud Spark infrastructure like AWS EMR, and with AWS tools in general such as S3, SQS, VPC, and API Gateway. Familiarity with data processing frameworks and libraries. Strong understanding of distributed computing and parallel processing. Knowledge of Agile methodologies and good coding practices. Ability to work both independently and as part of a team. Strong communication and interpersonal skills. Experience with version control systems like Bitbucket/Git. Experience with CI/CD tools like Jenkins, Docker, or Kubernetes.
Posted 6 hours ago
0.0 - 4.0 years
2 - 6 Lacs
Hyderabad
Work from Office
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.

Your Team's Impact: The Research Analyst - ESG Fundamental Data - Operations Team will focus on initiatives related to the collection of company-reported ESG data. The Analyst should have strong business acumen and the ability to leverage a wide array of data and research analysis techniques to develop innovative recommendations. The ideal candidate is passionate about different secondary and primary research approaches, interpreting data, providing thought leadership and insights, and making recommendations in line with business goals. The Research Analyst will be responsible and accountable for contributing to the growth of FactSet's ESG solutions knowledge library product and for ensuring consistently high-quality research.

What You'll Do: Collect, analyze, and process company-reported data from various data sources into the database utilized by the ESG content team. ESG data processing, research, and analysis in line with the ESG content team's methodology. Address audit resolutions and recommendations from QC/Specialists based on specific guidelines. Track and monitor developments and trends in various sectors of specialization, and assess the ESG impact of those companies on the overall sector. Update internal documents for performance metrics monitoring. Escalate data interpretation issues, as needed. Troubleshoot problems or issues and provide support to the team. Ensure that the integrity of the company's ESG data information is intact. Provide timely, accurate, and reliable demographic and statistical ESG reported data. Fulfill research, analytical, and market study needs as and when required or requested by internal/external stakeholders in order to procure and retain future and current business. The job also involves reporting and maintaining records of work done on a timely basis. Share ideas and best practices from a process improvement perspective.

What We're Looking For: Graduate in Commerce, Environmental Science, Corporate Governance, Business Management, Finance, Accounting, Economics, or equivalent. Computer literacy to effectively use the database and software for collection and processing of content/data. Good communication skills, written and verbal. Numerical skills to ensure understanding of the financial data to be collected and/or processed. Analytical skills and detail orientation to ensure accuracy of data. Good knowledge of financial markets and accountancy (as needed) to ensure effective understanding of the assigned content/market.

What's In It For You: At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being, including health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations: we value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. Career progression planning with dedicated time each month for learning and development. Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging. Learn more about our benefits here. Salary is just one component of our compensation package and is based on several factors, including but not limited to education, work experience, and certifications.

Company Overview: FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
Posted 6 hours ago
4.0 - 9.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Zendesk is a service-first CRM company that builds powerful, customizable software designed to improve customer relations. At Zendesk, we encourage growth and innovation, and believe in giving back to the communities we call home. We are currently looking for a Senior Software Engineer based in Pune with proven experience in implementing high-scale data-processing applications to work on building the Real-time Reporting & Analytics pipeline at Zendesk. As a Senior Software Engineer, you would contribute to designing, implementing, refactoring, and optimising systems, with ownership of major project components. The ideal candidate has experience in building large-scale cloud applications, preferably in a SaaS environment, and has previously contributed to real-time data processing while partnering with team stakeholders. Please note that Zendesk can only hire candidates who are physically located in, and plan to work from, Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.

What you'll be doing: Solve complex problems and make decisions about technical trade-offs that optimise for prioritised qualities by analysing multiple perspectives and information. Take ownership, estimate, and prioritise groups of work based on previous experience and trade-offs. Own the definition, communication, performance, cost, quality, security, and compliance effects of groups of work. Contribute to software development by designing systems, refactoring in balance with feature delivery, optimising system performance, and more. Mentor junior team members and other developers, fostering a culture of continuous learning and technical excellence.

What you bring to the role: 4+ years of experience working on high-scale applications and data processing pipelines. Proven experience in software engineering with a focus on delivering large-scale, distributed, high-quality applications. Hands-on experience with real-time data processing technologies (e.g. Kafka, Kinesis, ElasticSearch, Flink, Spark, ClickHouse, Snowflake, etc.). Demonstrated ability to take ownership of designing and implementing technical components. Ability to explain and implement reliability patterns while solving problems in production. Ability to improve the observability, performance, and security of the system. Strong communication skills, both written and verbal, and the ability to collaborate with teams across multiple time zones globally. Proficiency in Java and hands-on experience with technologies like AWS, Kafka, Docker, and Kubernetes are preferred.
Posted 6 hours ago
6.0 - 11.0 years
6 - 7 Lacs
Bengaluru
Work from Office
Founded in 1976, CGI is among the world's largest independent IT and business consulting services firms. With 94,000 consultants and professionals globally, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more about us at cgi.com. Job Title: SAP Middleware PI (Process Integration) / PO (Process Orchestration) Consultant Position: SSE / LA / AC Experience: 6+ years of experience Category: Software Development Job location: Bangalore / Chennai / Pune / Hyderabad Position ID: J0325-1757 Work Type: Hybrid Employment Type: Full Time / Permanent Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. As an SAP PI/PO Consultant, you will be responsible for designing, developing, and managing integrations between SAP and non-SAP systems using SAP Process Integration (PI) and Process Orchestration (PO). Your role includes configuring adapters, developing mappings, troubleshooting integration issues, optimizing performance, ensuring security compliance, and working on SAP Cloud Platform Integration (CPI) to support seamless data exchange and business process automation. Responsibilities and Must-Have Skills: Design, develop, and configure SAP PI/PO interfaces for seamless data exchange between SAP and non-SAP systems. Monitor and troubleshoot integration issues to ensure data flow consistency and system reliability. Develop mappings using graphical, XSLT, and Java-based transformations for structured data exchange. Configure adapters (IDoc, SOAP, REST, JDBC, RFC, File, etc.)
to enable integration between different systems. Work on SAP Cloud Platform Integration (CPI) for hybrid and cloud-based integration scenarios. Implement error handling, alerting mechanisms, and logging for proactive issue resolution. Optimize integration performance by tuning message processing, queues, and system parameters. Ensure compliance with security policies by implementing encryption, authentication, and authorization protocols. Collaborate with functional, Basis, and development teams to align integration strategies with business needs. Prepare documentation and provide end-user training on interface operations and troubleshooting. Good-to-Have Skills: SAP Cloud Platform Integration (CPI) - Experience with cloud-based integration scenarios. API Management - Knowledge of REST, SOAP, OData, and GraphQL APIs for modern integrations. B2B Integration - Experience with EDI, AS2, and ANSI X12 standards for business-to-business transactions. SAP Event Mesh & Web Services - Understanding of event-driven architecture for real-time data processing. Skills: English, Client Management, Engineering.
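The mapping work described above is normally built in PI/PO's graphical, XSLT, or Java mapping tools rather than in standalone code, but the underlying idea — reshaping a source payload into a target structure, field by field — can be sketched generically. Everything below (the `Order` schema, element names, the flat target layout) is an invented illustration, not an SAP IDoc schema or a PI/PO API:

```python
import xml.etree.ElementTree as ET

# Hypothetical source payload from a sender system (not a real IDoc).
SOURCE = """
<Order>
  <Header><OrderId>4711</OrderId><Currency>EUR</Currency></Header>
  <Items>
    <Item><Sku>A-100</Sku><Qty>2</Qty><Price>9.50</Price></Item>
    <Item><Sku>B-200</Sku><Qty>1</Qty><Price>4.00</Price></Item>
  </Items>
</Order>
"""

def map_order(xml_text: str) -> dict:
    """Map the nested source structure onto a flat target structure,
    analogous to how a message mapping maps source to target fields."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("Item"):
        qty = int(item.findtext("Qty"))
        price = float(item.findtext("Price"))
        items.append({"sku": item.findtext("Sku"),
                      "quantity": qty,
                      "line_total": qty * price})
    return {
        "order_id": root.findtext("Header/OrderId"),
        "currency": root.findtext("Header/Currency"),
        "items": items,
        # A derived field, like a calculated target node in a mapping.
        "order_total": sum(i["line_total"] for i in items),
    }

result = map_order(SOURCE)
print(result["order_id"], result["order_total"])  # 4711 23.0
```

In a real integration the same source-to-target logic would be expressed declaratively in the mapping editor or an XSLT sheet, with the adapter (IDoc, SOAP, JDBC, etc.) handling transport on either side.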
Posted 6 hours ago
2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Join our Team About this opportunity: We are looking for a skilled and motivated AI/ML & GenAI Engineer to join our team and drive innovation through cutting-edge artificial intelligence and machine learning solutions. In this role, you will be responsible for designing, developing, and deploying scalable AI models, including large language models (LLMs), to solve real-world business challenges. You'll work across the full machine learning lifecycle, from data preparation to production deployment, leveraging modern MLOps practices and cloud technologies such as AWS and Azure. This role offers a unique opportunity to contribute to impactful projects in a collaborative, fast-paced, and technology-driven environment. What you will do: Develop, train, test, and deploy Machine Learning and GenAI LLM models. Collect, clean, and preprocess large-scale datasets for AI/ML training and evaluation. Collaborate with cross-functional teams to understand business needs and translate them into AI solutions. Design and implement scalable AI services and pipelines using Python and cloud technologies (e.g., Azure, AWS). Continuously improve model performance through tuning, optimization, and retraining. Apply MLOps practices, using IaC to deploy models into production and industrialise the business solution. The skills you bring: Strong expertise in AWS services: Glue, SageMaker, Lambda, CloudWatch, S3, IAM, etc. Solid programming skills in Python and experience with PySpark for large-scale data processing. Experience with DevOps/MLOps tools such as Azure DevOps and GitHub Actions. Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, Engineering, or a related technical field. Experience: 2-5 years of hands-on experience. Why join Ericsson? What happens once you apply? Primary country and city: India (IN) || Bangalore Req ID: 768928
Posted 6 hours ago
6.0 - 12.0 years
11 - 13 Lacs
Bengaluru
Work from Office
This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. Job Title: Big Data Developer (Scala + Spark) Position: Big Data Developer (Scala + Spark) Experience: 7+ Years Category: IT Infrastructure Main location: Bangalore/Chennai Position ID: J0125-0972 Employment Type: Full Time Education: Bachelor of Engineering Responsibilities: Develop, maintain, and optimize scalable data pipelines using Apache Spark and Scala. Work with large-scale structured and unstructured data from multiple sources. Implement data transformation, cleansing, and enrichment processes. Collaborate with data architects, data scientists, and business analysts to understand data requirements. Tune Spark applications for performance and scalability in a distributed environment. Ensure data security, quality, and governance best practices. Work in an Agile/Scrum environment and participate in sprint planning, reviews, and retrospectives. Deploy and monitor jobs on distributed data processing platforms such as Hadoop, AWS EMR, or Databricks. Required Skills: Strong programming experience in Scala and Apache Spark (batch and/or streaming). Experience with big data ecosystems like Hadoop, Hive, HDFS, and Kafka. Solid understanding of distributed computing and performance optimization. Familiarity with ETL development and data warehousing concepts. Experience with cloud platforms like AWS, Azure, or GCP. Proficient in using version control tools like Git and CI/CD practices. Knowledge of SQL and NoSQL databases.
Required qualifications to be successful in this role: Experience: 6 to 12 years. Location: Bangalore / Hyderabad / Chennai / Pune. Shift: 1 pm to 10 pm. Skills: Java, Banking, Python.
Posted 6 hours ago
3.0 - 7.0 years
6 - 10 Lacs
Pune
Work from Office
Brief Job Description: Environmental Health & Safety Analyst: Coordination of activities related to the implementation and maintenance of occupational health and safety (OHS), fire protection (FP), and environmental protection (EP) management systems. Responsibilities and Measurement Criteria, with Time Investment Needed on Each: (This describes the overall core responsibilities of the role, decision-making responsibilities, etc.) Implement legal, corporate, and specific rules and procedures in the areas of OHS, FP, and EP. Inspect equipment and workplaces for OHS compliance (e.g., ladders, shelves) and review EHS documentation. Monitor the implementation of established rules according to internal EHS regulations for designated workplaces. Collaborate with EHS coordinators and managers to identify, address, and implement preventive and corrective measures to ensure compliance and resolve issues found in inspections/audits related to OHS, FP, and EP. Identify hazards and risks, collaborate on risk assessments for injury prevention and health protection, and regularly update hazard and risk analyses. Conduct employee training on general and specific topics. Support workplace managers in delivering training to their subordinates. Assist in investigating workplace accidents and other incidents, monitor the implementation of corrective actions, and cooperate with external agencies, government institutions, and insurance companies. Collaborate with EHS coordinators in creating, managing, and updating internal documentation, and independently draft designated EHS documents. Monitor legislation related to OHS, EP, and FP. Organize medical check-ups in cooperation with selected healthcare providers. Ensure regular inspections of shelves and ladders.
Conduct regular EHS compliance inspections, focusing primarily on: legal, corporate, and specific rules and procedures; employee responsibilities, especially the application of established procedures and the use of PPE. Perform all other activities related to the agreed type of work as instructed by the superior. Follow established management system procedures, rules, and principles. Work in accordance with ITC international trade rules. Contribute to the development and adherence to standards, focusing on key areas such as customer satisfaction, company culture, and continuous improvement. Within the continuous improvement system, submit suggestions aligned with company goals. Qualifications: Required/Minimum Qualifications: Full secondary technical education. Qualification as a safety technician is an advantage. Qualification as a fire technician is an advantage. Experience in the field is an advantage. 2 years of experience in a manufacturing company. English - intermediate level (written and spoken). Working with people, communication. Task/plan tracking. Problem analysis and solution proposals. Data processing and report generation. Knowledge of OHS, FP, EP. Experience in conducting and dealing with audits. Precision, diligence, independence. Additional/Preferred Qualifications: IT skills: Microsoft Office. Communication skills. Logical thinking, systematic approach, and reasoning. Teamwork and goal orientation. Positive thinking, perseverance. Physical & Environmental Requirements: (To be used mainly for manufacturing jobs.) None. Travel Needed: None
Posted 7 hours ago
2.0 - 4.0 years
11 - 12 Lacs
Chennai
Work from Office
About the Role: We are looking for a Data Engineer with 5+ years of experience in building modern data pipelines and ETL solutions using SQL, GCP native services such as BigQuery, Dataflow, etc., and potentially other data processing tools. Requirements: In-depth knowledge of data warehousing concepts and databases like BigQuery, Oracle, Teradata, DB2 and PostgreSQL. Development of data integration processes, potentially involving micro-services architecture for data flow. Develops data integration modules and implements data transformations. Participates in reviews of data models, ETL designs and code. Assists in deployment of data pipelines and related components to Deve.
Posted 7 hours ago
7.0 - 8.0 years
15 - 16 Lacs
Hyderabad
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Marketing Title. In this role, you will: Design, develop, and maintain scalable ETL/ELT pipelines using PySpark and Python. Build and manage real-time data ingestion and streaming pipelines using Apache Kafka. Develop and optimize data workflows and batch processes on GCP using services like BigQuery, Dataflow, Pub/Sub, and Cloud Composer. Implement data quality checks, error handling, and monitoring across pipelines. Collaborate with data scientists, analysts, and business teams to translate requirements into technical solutions. Ensure best practices in code quality, pipeline reliability, and data governance. Maintain thorough documentation of processes, tools, and infrastructure. Requirements To be successful in this role, you should meet the following requirements: 6+ years of experience in data engineering roles. Strong programming skills in Python and PySpark. Solid experience in working with Kafka for real-time data processing. Proven hands-on experience with GCP data tools and architecture. Familiarity with CI/CD, version control (Git), and workflow orchestration tools (Airflow/Composer). Strong analytical and problem-solving skills with attention to detail. Excellent communication and team collaboration skills. You'll achieve more when you join HSBC.
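The pipeline duties this posting describes (transformations, data quality checks, error handling across pipelines) would in practice be written as PySpark DataFrame operations; as a rough, dependency-free sketch of the same shape, here is a stdlib-only Python version with a dead-letter route for records that fail the quality check. The record layout and field names are made up purely for illustration:

```python
# Hypothetical raw records, e.g. parsed from a Kafka topic or a batch extract.
RAW = [
    {"id": "1", "amount": "120.50", "country": "IN"},
    {"id": "2", "amount": "",       "country": "IN"},   # fails the quality check
    {"id": "3", "amount": "75.25",  "country": "GB"},
]

def transform(record: dict) -> dict:
    """Cleanse and type one record (the 'T' of ETL)."""
    return {
        "id": int(record["id"]),
        "amount": float(record["amount"]),  # raises ValueError on bad data
        "country": record["country"].upper(),
    }

def run_pipeline(raw_records):
    """Apply the transform, routing bad rows to a dead-letter list
    instead of failing the whole batch -- the same pattern a PySpark
    job would implement with a filter or a try/except in a UDF."""
    good, dead_letter = [], []
    for rec in raw_records:
        try:
            good.append(transform(rec))
        except (KeyError, ValueError):
            dead_letter.append(rec)
    return good, dead_letter

good, bad = run_pipeline(RAW)
print(len(good), len(bad))  # 2 1
```

In a real GCP deployment the dead-letter records would typically land in a separate BigQuery table or Pub/Sub topic for monitoring and replay rather than an in-memory list.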
Posted 7 hours ago
3.0 - 6.0 years
5 - 9 Lacs
Mumbai
Work from Office
ISS STOXX is actively hiring a Software Engineer in React to join our Liquid Metrix team in our Mumbai office (Goregaon East). Overview: Purpose, mastery and autonomy! We provide all three at ISS STOXX to develop industry-leading data processing and analytical platforms. We are looking for an experienced Software Engineer in React to join our LQM development team to help architect, develop, and release optimal solutions to business requirements. You will work as a user interface engineer and lean on backend skills to provide end-to-end solutions, with strong potential for growth into a senior full-stack engineer role in the future. Shift hours: 11 AM to 8 PM IST Responsibilities: Play an active role in a global, agile team developing complex applications with very large data sets using tools such as AG Grid and Highcharts. Development and enhancement of our SaaS web application, public APIs, and internal systems. Automated testing, ensuring high-quality features for our clients using tools such as Cypress. Review areas for further improvement in performance, stability and scalability. Keeping up to date with the latest development trends and current best practices. Qualifications: Minimum 3 to 6 years of web development experience. Strong skills with ReactJS, HTML, CSS, SASS/LESS, and UI patterns. Solid experience with responsive layouts. Ability to quickly create prototypes. Competent using version control with Git and Git Flow. Experience working with data-driven applications (charting and data presentation). Excellent problem-solving and analytical skills, with experience in operational support and escalation. Experience with automated testing/unit testing. Ability to work independently while proactively supporting team goals. Good to have: Experience with C#, .NET 4.5+ - ideally .NET Core. Knowledge of financial markets & security trading. JIRA. Working in a distributed Agile team environment. SQL database skills, including SQL Server. Experience with AWS. Experience with Azure DevOps
pipelines. What You Can Expect from Us: At ISS STOXX, our people are our driving force. We are committed to building a culture that values diverse skills, perspectives, and experiences. We hire the best talent in our industry and empower them with the resources, support, and opportunities to grow professionally and personally. Together, we foster an environment that fuels creativity, drives innovation, and shapes our future success. Let's empower, collaborate, and inspire. Let's be #BrilliantTogether. About ISS STOXX: ISS STOXX GmbH is a leading provider of research and technology solutions for the financial market. Established in 1985, we offer top-notch benchmark and custom indices globally, helping clients identify investment opportunities and manage portfolio risks. Our services cover corporate governance, sustainability, cyber risk, and fund intelligence. Majority-owned by Deutsche Börse Group, ISS STOXX has over 3,400 professionals in 33 locations worldwide, serving around 6,400 clients, including institutional investors and companies focused on ESG, cyber, and governance risk. Clients trust our expertise to make informed decisions for their stakeholders' benefit. Specifically, ISS LiquidMetrix provides a wide range of offerings, including Transaction Cost Analysis (TCA), execution quality, market abuse, and pre-trade analysis services across every public order and trade executed on European venues. Clients include sell sides, buy sides, exchanges, and regulators that require actionable analysis, reports, compliance tools, and global coverage. Visit our website: https://www.issgovernance.com View additional open roles: https://www.issgovernance.com/join-the-iss-team/
Posted 7 hours ago
0.0 - 1.0 years
1 - 2 Lacs
Serampore
Work from Office
Responsibilities: * Manage back office operations: data entry, processing & reporting. * Collaborate with team on MIS ops & Excel report prep. * Follow banking industry standards & compliance guidelines. Contact with HR - 7003551682 Office cab/shuttle Health insurance Provident fund Annual bonus
Posted 7 hours ago
1.0 - 3.0 years
0 - 2 Lacs
Gurugram
Work from Office
Urgent Hiring for Data Entry Operator. Job Location: Gurugram Sec 82 / IMT Manesar, Gurgaon. Only male candidates. Interested candidates, send an updated CV on WhatsApp: 9315987720. Prepare Excel reports from data entered into Google Sheets. Collaborate with backend operations team on non-voice tasks. Manage computer systems, perform back office processing.
Posted 8 hours ago
2.0 - 6.0 years
9 - 13 Lacs
Hyderabad
Work from Office
About Workato Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real-time, driving efficiency and agility. Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today's fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com. Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company. But we also believe in balancing productivity with self-care. That's why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives. If this sounds right up your alley, please submit an application. We look forward to getting to know you! Also, feel free to check out why: Business Insider named us an enterprise startup to bet your career on; Forbes Cloud 100 recognized us as one of the top 100 private cloud companies in the world; Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America; Quartz ranked us the #1 best company for remote workers. Responsibilities We are looking for an experienced and exceptional Senior AI/Machine Learning Engineer to join our growing team. In this role, you will be involved in the design, development, and optimization of AI and Machine Learning products that deliver exceptional user experiences. The ideal candidate will combine strong software engineering skills with deep knowledge of machine learning systems.
You will also be responsible for the following: Build conversational AI interfaces that handle multi-turn customer interactions, maintain context across sessions, and seamlessly escalate to human agents when necessary. Design and implement advanced AI/ML systems with a focus on LLMs, AI Agents, and retrieval-augmented generation (RAG) architectures. Build production-grade AI pipelines for data processing, model training, fine-tuning, and serving at scale. Implement feedback loops and continuous learning systems that incorporate customer satisfaction metrics, agent corrections, and conversation outcomes to improve model performance over time. Create analytics dashboards and reporting tools to track automation effectiveness, identify common customer pain points, and measure key performance indicators like resolution time, containment rate, and customer satisfaction scores. Lead technical initiatives for AI system integration into existing products and services. Collaborate with data scientists and ML researchers to implement and productionize new AI approaches and models. Requirements Qualifications / Experience / Technical Skills: Bachelor's degree in Computer Science or a related field, or equivalent practical experience. 5+ years in backend software development using modern programming languages (e.g., Python (strongly preferred!), Golang or Java). Demonstrated experience building production conversational AI systems including chatbots, virtual assistants, and automated support agents using LLMs (OpenAI, Anthropic, open-source models). Expertise in natural language understanding (NLU) and intent classification for customer query interpretation, entity extraction, and conversation flow management. Experience implementing multi-channel support automation across chat, email, voice, and messaging platforms with consistent context handling.
Strong background in customer support metrics and KPIs including CSAT, first contact resolution, average handle time, and containment rate optimization. Experience with sentiment analysis and emotion detection for escalation triggers and customer satisfaction monitoring. Expertise in building knowledge bases and FAQ systems with dynamic content retrieval and self-learning capabilities from support interactions. Proficiency with contact center platforms (Zendesk, Salesforce Service Cloud, Genesys, or similar) and their API integrations. Experience implementing real-time agent assist systems that provide suggestions, knowledge articles, and response templates during live interactions. Familiarity with compliance and security requirements for handling sensitive customer data in automated systems (PCI, HIPAA, GDPR). Experience with A/B testing and experimentation frameworks for optimizing conversation flows and response strategies. Soft Skills / Personal Characteristics: Strong communication abilities to explain technical concepts. Collaborative mindset for cross-functional teamwork. Detail-oriented with strong focus on quality. Self-motivated and able to work independently. Passion for solving complex search problems. (REQ ID: 2158)
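The RAG architectures this posting asks about retrieve relevant passages before the LLM generates an answer. As a toy illustration only — real systems use embedding similarity against a vector store, and the knowledge-base entries here are invented — the retrieval step can be sketched with simple word-overlap scoring:

```python
# Toy retriever: score knowledge-base articles by word overlap with the query.
# Production RAG swaps this scoring for embedding similarity, but the control
# flow is the same: retrieve top-k passages, then hand them to the LLM as context.
KB = {
    "reset-password": "how to reset your account password via the login page",
    "refund-policy": "refunds are issued within 14 days of a cancelled order",
    "shipping-times": "standard shipping takes 3 to 5 business days",
}

def retrieve(query: str, k: int = 1):
    """Return the ids of the k best-matching articles for the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        KB.items(),
        key=lambda kv: len(q_words & set(kv[1].split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

print(retrieve("when will my refund be issued"))  # ['refund-policy']
```

The retrieved passages would then be concatenated into the LLM prompt, which is what grounds the generated answer in the knowledge base rather than in the model's parameters alone.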
Posted 8 hours ago
15.0 - 20.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: PySpark Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: An Engineering graduate, preferably a Computer Science graduate, with 15 years of full-time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will require a balance of technical expertise and leadership skills to drive the project forward successfully. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Facilitate regular team meetings to discuss progress and address any roadblocks. Professional & Technical Skills: - Must-Have Skills: Proficiency in PySpark. - Strong understanding of data processing frameworks and distributed computing. - Experience with data integration and ETL processes. - Familiarity with cloud platforms and services related to data processing. - Ability to write efficient and optimized code for data manipulation.
Additional Information: - The candidate should have a minimum of 7.5 years of experience in PySpark. - This position is based in Pune. - An Engineering graduate, preferably a Computer Science graduate, with 15 years of full-time education is required. Qualification: An Engineering graduate, preferably a Computer Science graduate, with 15 years of full-time education
Posted 8 hours ago
10.0 - 12.0 years
37 - 40 Lacs
Pune
Work from Office
JR: R00208204 Experience: 10-12 Years Educational Qualification: Any Degree Job Title - S&C - Data and AI - CFO&EV Quantexa Platform (Assoc Manager) Management Level: 8 - Associate Manager Location: Pune, PDC2C Must-have skills: Quantexa Platform Good to have skills: Experience in financial modeling, valuation techniques, and deal structuring. Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions. Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance. WHAT'S IN IT FOR YOU? The Accenture CFO & EV team under the Data & AI team has a comprehensive suite of capabilities in Risk, Fraud, Financial Crime, and Finance. Within the risk realm, our focus revolves around model development, model validation, and auditing of models. Additionally, our work extends to ongoing performance evaluation, vigilant monitoring, meticulous governance, and thorough documentation of models. Get to work with top financial clients globally. Access resources enabling you to utilize cutting-edge technologies, fostering innovation with the world's most recognizable companies. Accenture will continually invest in your learning and growth and will support you in expanding your knowledge. You'll be part of a diverse and vibrant team collaborating with talented individuals from various backgrounds and disciplines, continually pushing the boundaries of business capabilities and fostering an environment of innovation. What you would do in this role: Engagement Execution: Lead client engagements that may involve model development, validation, governance, strategy, transformation, implementation and end-to-end delivery of fraud analytics/management solutions for Accenture's clients.
Advise clients on a wide range of Fraud Management/Analytics initiatives. Projects may involve Fraud Management advisory work for CXOs, etc., to achieve a variety of business and operational outcomes. Develop and frame Proofs of Concept for key clients, where applicable. Practice Enablement: Mentor, groom and counsel analysts and consultants. Support development of the Practice by driving innovations and initiatives. Develop thought capital and disseminate information around current and emerging trends in Fraud Analytics and Management. Support efforts of the sales team to identify and win potential opportunities by assisting with RFPs and RFIs. Assist in designing POVs and GTM collateral. Travel: Willingness to travel up to 40% of the time. Professional Development Skills: Project dependent. Professional & Technical Skills: - Relevant experience in the required domain. - Strong analytical, problem-solving, and communication skills. - Ability to work in a fast-paced, dynamic environment. Advanced skills in development and validation of fraud analytics models, strategies, and visualizations. Understanding of new/evolving methodologies/tools/technologies in the Fraud Management space. Expertise in one or more domains/industries, including regulations, frameworks, etc. Experience in building models using AI/ML methodologies. Modeling: Experience in one or more analytical tools such as SAS, R, Python, SQL, etc. Knowledge of data processes, ETL and tools/vendor products such as VISA AA, FICO Falcon, EWS, RSA, IBM Trusteer, SAS AML, Quantexa, Ripjar, Actimize, etc. Proven experience in one of data engineering, data governance, or data science roles. Experience in Generative AI or Central/Supervisory banking is a plus.
Strong conceptual knowledge and practical experience in the development, validation and deployment of ML/AI models. Hands-on programming experience with any of the analytics and visualization tools (Python, R, PySpark, SAS, SQL, Power BI/Tableau). Knowledge of big data, MLOps and cloud platforms (Azure/GCP/AWS). Strong written and oral communication skills. Project management skills and the ability to manage multiple tasks concurrently. Strong delivery experience of short- and long-term analytics projects. Additional Information: - Opportunity to work on innovative projects. - Career growth and leadership exposure. About Our Company | Accenture Qualification Experience: 10-12 Years Educational Qualification: Any Degree
Posted 8 hours ago
5.0 - 10.0 years
10 - 14 Lacs
Chennai
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: PySpark Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Lead the application development process. - Ensure successful project delivery. - Implement best practices for application design and configuration. Professional & Technical Skills: - Must-Have Skills: Proficiency in PySpark. - Strong understanding of big data processing. - Experience with data processing frameworks like Apache Spark. - Hands-on experience in building scalable applications. - Knowledge of cloud platforms for application deployment. Additional Information: - The candidate should have a minimum of 5 years of experience in PySpark. - This position is based at our Chennai office. - 15 years of full-time education is required. Qualification: 15 years of full-time education
Posted 8 hours ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: PySpark Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages team growth and success. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team capabilities. - Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: - Must-Have Skills: Proficiency in PySpark. - Good To Have Skills: Experience with Apache Hadoop. - Strong understanding of data processing frameworks. - Experience in building scalable data pipelines. - Familiarity with cloud platforms such as AWS or Azure. Additional Information: - The candidate should have a minimum of 5 years of experience in PySpark. - This position is based at our Bengaluru office. - 15 years of full-time education is required. Qualification: 15 years of full-time education
Posted 8 hours ago
The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.
Major cities across India are actively hiring for data processing roles, offering job seekers a multitude of opportunities.
The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.
A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.
In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.
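Screening exercises for entry-level data processing roles often combine the Python and SQL skills mentioned above in a single task. A minimal sketch using only Python's standard-library sqlite3 module is shown below; the table and column names are illustrative assumptions, not taken from any real assessment:

```python
import sqlite3

# An in-memory database stands in for a real data store (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("South", 120.0), ("North", 80.0), ("South", 40.0)],
)

# A typical processing task: aggregate with SQL, then post-process in Python.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
totals = {region: total for region, total in rows}
print(totals)  # {'North': 80.0, 'South': 160.0}
conn.close()
```

The same aggregation could be done in pure Python with a dictionary, but pushing the GROUP BY into SQL is the pattern most interviewers look for, since it scales to databases that do not fit in memory.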
As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!