
7017 Hadoop Jobs - Page 35

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

Agoda is an online travel booking platform that connects travelers with a vast global network of 4.7M hotels, holiday properties, flights, activities, and more. As part of Booking Holdings and based in Asia, Agoda's diverse team of 7,100+ employees from 95+ nationalities across 27 markets fosters an environment rich in diversity, creativity, and collaboration. At Agoda, innovation is driven by a culture of experimentation and ownership that enhances customers' experience of exploring the world.

The Data department, headquartered in Bangkok, oversees all data-related requirements within the company. Its primary objective is to enable and enhance the use of data through creative approaches and powerful resources such as operational and analytical databases, queue systems, BI tools, and data science technology. The team consists of bright minds from around the world, equipped with the knowledge and tools to support the company's culture of diversity and experimentation, and plays a critical role in empowering decision-making for business users, product managers, engineers, and other stakeholders, while also improving the customer search experience and safeguarding against fraudulent activity.

Agoda is seeking ambitious and agile data scientists to join the Data Science and Machine Learning (AI/ML) team in Bangkok. The role involves working on challenging machine learning and big data platforms, processing approximately 600B events daily and making 5B predictions. As part of this team, you will tackle real-world challenges such as dynamic pricing, customer intent prediction, personalized recommendations, and more. The role offers the opportunity to work with one of the world's largest ML infrastructures, employing advanced technologies such as GPUs and large pools of CPU cores and memory for innovation and user experience enhancement.
In this role, you will design, code, experiment with, and implement models and algorithms to maximize customer experience, business outcomes, and infrastructure readiness. You will analyze vast amounts of data to derive actionable insights that drive improvement and innovation, collaborating with developers and various business owners to deliver high-quality results.

Successful candidates will have hands-on data science experience; a strong understanding of AI/ML/DL and statistics; proficiency in Python, PySpark, and SQL; and excellent communication skills for multicultural teamwork. Preferred qualifications include a PhD or MSc in a relevant field and experience in NLP, image processing, recommendation systems, data engineering, or data science for e-commerce or OTA.

Agoda welcomes applications from both local and international candidates, offering full visa sponsorship and relocation assistance for eligible individuals. Agoda is an Equal Opportunity Employer and keeps applications on file for future vacancies. For further information, please refer to the privacy policy on our website. Agoda does not accept third-party resumes and is not responsible for any fees related to unsolicited resumes.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

You are a skilled QA / Data Engineer with 3-5 years of experience, joining a team focused on ensuring the quality and reliability of data-driven applications. Your expertise lies in manual testing and SQL, with additional knowledge of automation and performance testing being highly valuable. Your responsibilities include performing thorough testing and validation to guarantee the integrity of the applications.

Must-have skills:
- Extensive experience in manual testing within data-centric environments.
- Strong SQL skills for data validation and querying.
- Familiarity with data engineering concepts such as ETL processes, data pipelines, and data warehousing.
- Experience with geo-spatial data.
- A solid understanding of QA methodologies and best practices for software and data testing.
- Excellent communication skills.

Nice-to-have skills:
- Experience with automation testing tools and frameworks such as Selenium and JUnit for data pipelines.
- Knowledge of performance testing tools such as JMeter and LoadRunner for evaluating data systems.
- Familiarity with data engineering tools and platforms such as Apache Kafka, Apache Spark, and Hadoop.
- Understanding of cloud-based data solutions (AWS, Azure, Google Cloud) and their testing methodologies.
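The SQL data-validation work this role describes often amounts to reconciling an ETL pipeline's output against its input. A minimal sketch of such a check, using SQLite purely as a stand-in engine and hypothetical `src_orders`/`tgt_orders` tables:

```python
import sqlite3

# Hypothetical source and target tables standing in for an ETL pipeline's
# input and output; in practice these would live in separate systems.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE tgt_orders (order_id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
""")

def validate(conn):
    """Row-count and checksum reconciliation between source and target."""
    src_count, src_sum = conn.execute(
        "SELECT COUNT(*), ROUND(SUM(amount), 2) FROM src_orders").fetchone()
    tgt_count, tgt_sum = conn.execute(
        "SELECT COUNT(*), ROUND(SUM(amount), 2) FROM tgt_orders").fetchone()
    return src_count == tgt_count and src_sum == tgt_sum

print(validate(conn))  # True when the load reconciles
```

Real pipelines extend the same idea with per-partition counts, null-rate checks, and referential-integrity queries.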

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Responsibilities:
- Strategic Data & AI Advisory: Lead the development of data and AI strategies that align with clients' business objectives, enabling digital transformation and data-driven decision-making.
- Data & AI Strategy: Design comprehensive data and AI roadmaps, helping clients unlock insights and drive innovation through advanced analytics and automation.
- Data Governance Leadership: Design and implement data governance frameworks, including policies, roles, processes, and tools, to ensure data quality, security, compliance, and usability across the organization. Guide clients on data stewardship, metadata management, master data management (MDM), and regulatory compliance (e.g., GDPR, HIPAA).
- IT Infrastructure Optimization: Oversee the assessment and optimization of clients' IT infrastructure, systems, and processes to enhance efficiency and scalability.
- Technology Consulting: Provide expert guidance on the selection, deployment, and integration of technology solutions, including software, hardware, cloud, and network systems.
- Cross-functional Collaboration: Work closely with engineering, business, and data teams to deliver tailored, end-to-end IT solutions that meet client-specific needs.
- Assessment & Solution Design: Conduct detailed assessments of clients' IT environments and recommend targeted improvements.
- AI & ML Automation: Architect and deploy machine learning and AI solutions to optimize operations, automate decision-making, and generate business insights, using best-in-class technologies and platforms.
- Client Relationship Management: Serve as a trusted advisor, cultivating long-term relationships with clients and providing continuous strategic support.
- Innovation & Best Practices: Stay ahead of industry trends in data, AI, and IT strategy, bringing new ideas and continuous improvement to client engagements and internal consulting frameworks.
- Workshops & Enablement: Facilitate client workshops, training sessions, and capability-building engagements to foster adoption of data and AI practices.
- Governance & Risk Oversight: Identify and mitigate risks related to data use, technology implementation, and regulatory compliance.
- Methodology Development: Contribute to the evolution of internal consulting frameworks, tools, and best practices to enhance team performance and service quality.
- Monitoring & Reporting: Track project progress against milestones and communicate updates to clients and leadership, ensuring transparency and accountability.

Requirements:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, Business Analytics, or a related field.
- Experience: 3 years in consulting roles with a focus on data strategy, AI/ML, and project management.
- Technical Skills: Proficiency in data analytics tools (e.g., Python, R), machine learning frameworks (e.g., TensorFlow, scikit-learn), and data visualization platforms (e.g., Tableau, Power BI).
- Certifications: Relevant certifications such as PMP, Certified Data Management Professional (CDMP), or equivalent are advantageous.
- Soft Skills: Excellent communication, problem-solving, and interpersonal skills; ability to work collaboratively in a team.

Skills and Abilities:
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.
- Ability to work independently and as part of a team.
- Detail-oriented with a strong commitment to data security.
- Familiarity with data security tools and practices.

Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).
- Familiarity with regulatory standards related to data privacy and security (e.g., GDPR, HIPAA).
- Demonstrated ability to drive business development and contribute to practice growth.

(ref:hirist.tech)

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

About Prospecta
Founded in 2002 in Sydney, Australia, with additional offices in India, North America, and Canada and a local presence in Europe, the UK, and Southeast Asia, Prospecta is dedicated to providing top-tier data management and automation software for enterprise clients. Our journey began with a mission to offer innovative solutions, leading us to become a prominent data management software company over the years. Our flagship product, MDO (Master Data Online), is an enterprise Master Data Management (MDM) platform designed to streamline data management processes, ensuring accurate, compliant, and relevant master data creation, as well as efficient data disposal. With a strong presence in asset-intensive industries such as Energy and Utilities, Oil and Gas, Mining, Infrastructure, and Manufacturing, we have established ourselves as a trusted partner in the field.

Culture at Prospecta
At Prospecta, our culture is centered around growth and embracing new challenges. We boast a passionate team that collaborates seamlessly to deliver value to our customers. Our diverse backgrounds create an exciting work environment that fosters a rich tapestry of perspectives and ideas. We are committed to nurturing an environment that focuses on both professional and personal development. Career progression at Prospecta is not just about climbing the corporate ladder but about encountering a continuous stream of meaningful opportunities that enhance personal growth and technical proficiency, all under the guidance of exceptional leaders. Our organizational structure emphasizes agility, responsiveness, and achieving tangible outcomes. If you thrive in a dynamic environment, enjoy taking on various roles, and are willing to go the extra mile to achieve goals, Prospecta is the ideal workplace for you. We continuously push boundaries while maintaining a sense of fun and celebrating victories, both big and small.

About the Job
Position: Jr. Platform Architect / Sr. Backend Developer
Location: Gurgaon
Role Summary: In this role, you will be responsible for implementing technology solutions in a cost-effective manner by understanding project requirements and communicating them effectively to all stakeholders and facilitators.

Key Responsibilities:
- Collaborate with enterprise architects, data architects, developers & engineers, data scientists, and information designers to identify and define necessary data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Expertise in service architecture and development, ensuring high performance and scalability.
- Experience in Spark, Elastic Search, and SQL performance tuning and optimization.
- Proficiency in architectural design and development of large-scale data platforms and data applications.
- Hands-on experience with AWS, Azure, and OpenShift.
- Deep understanding of Spark and its internal architecture.
- Expertise in designing and building new cloud data platforms and optimizing them at the organizational level.
- Strong hands-on experience in big data technologies such as Hadoop, Sqoop, Hive, and Spark, including DevOps.
- Solid SQL (Hive/Spark) skills and experience tuning complex queries.

Must-Have:
- 7+ years of experience.
- Proficiency in Java, Spring Boot, Apache Spark, AWS, OpenShift, PostgreSQL, Elastic Search, message queues, and microservice architecture.

Nice-to-Have:
- Knowledge of Angular, Python, Scala, Azure, Kafka, file formats such as Parquet, AVRO, CSV, and JSON, and Hadoop, Hive, and HBase.

What will you get
Growth Path: At Prospecta, your career journey is filled with growth and opportunities. Depending on your career trajectory, you can kickstart your career or accelerate your professional development in a dynamic work environment. Your success is our priority, and as you exhibit your abilities and achieve results, you will have the opportunity to quickly progress into leadership roles.
We are dedicated to helping you enhance your experience and skills, providing you with the necessary tools, support, and opportunities to reach new heights in your career.

Benefits:
- Competitive salary.
- Health insurance.
- Paid time off and holidays.
- Continuous learning and career progression.
- Opportunities to work onsite at various office locations and/or client sites.
- Participation in annual company events and workshops.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The ideal candidate for this position will be responsible for understanding business problems and formulating analytical solutions. You will apply machine learning, data mining, and text mining techniques to develop scalable solutions for various business challenges. You will also be involved in speech-to-text translation for video/audio NLP analysis and generative AI applications for market research data. Additionally, you will solve problems using advanced AI techniques, including GenAI methods, by designing NLP/LLM/GenAI applications and products with robust coding practices. Experience with LLMs such as PaLM, GPT-4, and Mistral is preferred.

You will be expected to train, tune, validate, and monitor predictive models, as well as analyze and extract relevant information from large amounts of historical business data in structured and unstructured formats. Furthermore, you will establish scalable, efficient, automated processes for large-scale data analyses and develop and deploy data science models on cloud platforms such as GCP, Azure, and AWS. Working with large, complex data sets using tools such as SQL, Google Cloud services, Hadoop, Alteryx, and Python will be part of your daily responsibilities.

Preferred qualifications include 2+ years of experience in market research, data mining, statistical analysis, GenAI, modeling, deep learning, optimization, or similar analytics, and 2+ years of work experience with Python, SQL, and/or visualization/dashboard tools such as Tableau, Power BI, or QlikSense. Comfort working in an environment where problems are not always well defined, and the ability to effectively advocate technical solutions to various audiences, are also important. Candidates with 3-4 years of industry experience and a Bachelor's or Master's degree in a quantitative field such as Statistics, Computer Science, Economics, Mathematics, Data Science, or Operations Research are required for this position.
You should have 3+ years of experience using SQL to acquire and transform data, as well as experience with real-world data: data cleaning, data collection, and other data wrangling challenges. Knowledge of fundamental text data processing and excellent problem-solving, communication, and data presentation skills are essential. Moreover, you should be flexible enough to work across multiple projects, domains, and tools, possess the collaborative skills to work with business teams, and be comfortable coding with TensorFlow and/or PyTorch, along with NumPy, pandas, and scikit-learn. Experience with open-source NLP modules such as spaCy, Hugging Face, TorchText, fastai.text, and others is beneficial. Proven quantitative modeling and statistical analysis skills, along with a proactive and inquisitive attitude toward learning new tools and techniques, are highly valued.
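The "fundamental text data processing" the posting mentions is well illustrated by TF-IDF weighting, which scores a term higher the more it distinguishes one document from the rest. A minimal standard-library sketch (real projects would reach for scikit-learn's TfidfVectorizer or an NLP library such as spaCy; the documents here are invented):

```python
import math
from collections import Counter

# Toy corpus; each "document" is a short phrase.
docs = [
    "hotel booking price prediction",
    "customer intent prediction model",
    "hotel recommendation model",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# Document frequency: how many documents contain each term at least once.
df = Counter(term for doc in tokenized for term in set(doc))

def tfidf(doc_tokens):
    """Term frequency times inverse document frequency for one document."""
    tf = Counter(doc_tokens)
    return {t: (tf[t] / len(doc_tokens)) * math.log(n_docs / df[t])
            for t in tf}

scores = tfidf(tokenized[0])
# A term unique to this document ("booking") outscores one shared
# across documents ("prediction").
assert scores["booking"] > scores["prediction"]
```

The same weighting underlies keyword extraction and simple document-similarity baselines before moving to embedding-based methods.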

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The client is a global information technology, consulting, and business process services company headquartered in India, offering a wide range of services, such as IT consulting, application development, business process outsourcing, and digital solutions, to clients across diverse industries in over 167 countries. Its focus is on providing technology-driven solutions that enhance efficiency and innovation, contributing significantly to the digital transformation of businesses worldwide.

The client is looking for a Software Engineer with 5-8 years of experience who is proficient in Scala programming and has hands-on experience working in a globally distributed team. Experience with big data technologies such as Spark/Databricks and Hadoop/ADLS is essential. Additionally, experience with cloud platforms such as Azure (preferred), AWS, or Google Cloud is required, along with expertise in building data lakes and data pipelines using Azure, Databricks, or similar tools.

The ideal candidate should be familiar with the development life cycle, including CI/CD pipelines, and have experience in Business Intelligence project development, preferably with the Microsoft SQL Server BI stack (SSAS/SSIS). Data warehousing knowledge, designing star/snowflake schemas, working in source-controlled environments such as GitHub and Subversion, and agile methodology are also necessary skills. You should have a proven track record of producing secure and clean code, strong analytical and problem-solving skills, and the ability to work effectively within the complexity and boundaries of a leading global organization. Being a flexible and resilient team player with strong interpersonal skills who takes the initiative to drive projects forward is highly valued. Experience in financial services is a plus but not required. Fluency in English is a must for this role.

Interested candidates can respond by submitting their updated resumes.
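The star-schema design the posting asks for centers a fact table of measures on surrogate keys into surrounding dimension tables. A toy sketch with invented table and column names, using SQLite as a stand-in for a real warehouse:

```python
import sqlite3

# A minimal star schema: one central fact table (fact_sales) keyed to two
# dimension tables (dim_date, dim_product). Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, amount REAL);
    INSERT INTO dim_date VALUES (1, 2023), (2, 2024);
    INSERT INTO dim_product VALUES (10, 'books'), (20, 'games');
    INSERT INTO fact_sales VALUES (1, 10, 5.0), (2, 10, 7.0), (2, 20, 3.0);
""")

# The typical BI query shape: aggregate the fact table, slicing and grouping
# by attributes pulled from the dimensions.
rows = conn.execute("""
    SELECT d.year, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.year, p.category
    ORDER BY d.year, p.category
""").fetchall()
print(rows)  # [(2023, 'books', 5.0), (2024, 'books', 7.0), (2024, 'games', 3.0)]
```

Snowflaking further normalizes the dimensions (e.g. splitting category out of dim_product into its own table) at the cost of extra joins.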
For more job opportunities, please visit Jobs In India - VARITE. If you are not available or interested in this position, you are encouraged to refer potential candidates who might be a good fit. VARITE offers a Candidate Referral program through which you can earn a one-time referral bonus, based on the experience level of the referred candidate, upon their completion of a three-month assignment with VARITE.

VARITE is a global staffing and IT consulting company that provides technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is a primary and direct vendor to leading corporations in various verticals, including Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services. VARITE is an Equal Opportunity Employer committed to creating a diverse and inclusive workplace environment.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Punjab

On-site

You will be joining XenonStack, a company committed to being the Most Value Driven Cloud Native Platform Engineering and Decision Driven Analytics Company. As part of our team, you will have the opportunity to work on enterprise national and international projects worth millions of dollars. We take pride in our lively and purposeful work culture, emphasizing a people-oriented approach. At XenonStack, you can expect complete job and employee security, along with warm, authentic, and transparent communication.

As a Data Analyst at XenonStack, your responsibilities will include determining organizational goals; mining data from primary and secondary sources; cleaning and pruning data; analyzing and interpreting results; pinpointing trends, patterns, and correlations in complex data sets; providing concise data reports with visualizations; and creating and maintaining relational databases and data systems.

We are looking for candidates with technical expertise in statistics, programming languages (especially Python), advanced MS Excel, data warehousing, business intelligence, data analysis, data cleaning, and data visualization, along with knowledge of Spark, PySpark, SQL, NoSQL, SAS, Tableau, Hadoop, and JavaScript. Additionally, we value professional attributes such as excellent communication skills, attention to detail, analytical thinking, problem-solving aptitude, strong organizational skills, and visual thinking.

Ideal candidates should have a technical background, including a degree in BCA, BSc, B.Tech, MSc, or M.Tech, along with programming skills, and 3-4 years of experience. If you enjoy working in a collaborative environment, have a passion for data analysis, and possess the required technical skills, XenonStack welcomes you to join our team at Plot No. C-184, Sixth Floor 603, Sector 75 Phase VIIIA, Mohali 160071. This position requires in-office work.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join us as an Application Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts, and business stakeholders.

To be successful as an Application Engineer - PySpark Developer, you should have:
- Hands-on programming experience in a Big Data Hadoop ecosystem.
- Proficiency in PySpark, Hive, and Impala.
- Exposure to MongoDB or another NoSQL database.
- Solid experience with the Unix shell.
- Experience with scheduling tools such as AutoSys or Airflow.
- A strong understanding of Agile methodologies and tools (JIRA, Confluence).
- Experience with CI/CD tools such as Jenkins, TeamCity, or GitLab.
- Excellent communication and collaboration skills.
- The ability to work independently and drive delivery with minimal supervision.

Other highly valued skills include:
- A Bachelor's degree in Computer Science, Engineering, or a related field.
- Relevant certifications in Big Data or cloud technologies.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.
Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organisation's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
- Make evaluative judgements based on the analysis of factual information, paying attention to detail.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join us as a Data Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts, and business stakeholders.

To be successful as a Data Engineer - PySpark Developer, you should have:
- Hands-on programming experience in a Big Data Hadoop ecosystem.
- Proficiency in PySpark, Hive, and Impala.
- Exposure to MongoDB or another NoSQL database.
- Solid experience with the Unix shell.
- Experience with scheduling tools such as AutoSys or Airflow.
- A strong understanding of Agile methodologies and tools (JIRA, Confluence).
- Experience with CI/CD tools such as Jenkins, TeamCity, or GitLab.
- Excellent communication and collaboration skills.
- The ability to work independently and drive delivery with minimal supervision.

Other highly valued skills include:
- A Bachelor's degree in Computer Science, Engineering, or a related field.
- Relevant certifications in Big Data or cloud technologies.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.
Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise, with a thorough understanding of the underlying principles and concepts within that area.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. Alternatively, an individual contributor develops technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
- Make evaluative judgements based on the analysis of factual information, paying attention to detail.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Telangana

On-site

Genpact is a global professional services and solutions firm with over 125,000 employees in 30+ countries. Driven by curiosity, entrepreneurial agility, and the desire to create value for clients, we serve leading enterprises worldwide with deep business knowledge, digital operations services, and expertise in data, technology, and AI. Our purpose is the relentless pursuit of a world that works better for people.

We are currently seeking a Business Analyst - Data Scientist to join our team. In this role, you will be responsible for developing and implementing NLP models and algorithms, extracting insights from textual data, and working collaboratively with cross-functional teams to deliver AI solutions.

Responsibilities:
Model Development:
- Proficiency in various statistical, ML, and ensemble algorithms.
- Strong understanding of time series algorithms and forecasting use cases.
- Ability to evaluate model strengths and weaknesses for different problems.
Data Analysis:
- Extracting meaningful insights from structured data.
- Preprocessing data for ML/AI applications.
Collaboration:
- Working closely with data scientists, engineers, and business stakeholders.
- Providing technical guidance and mentorship to team members.
Integration and Deployment:
- Integrating ML models into production systems.
- Implementing CI/CD pipelines for continuous integration and deployment.
Documentation and Training:
- Documenting processes, models, and results.
- Providing training and support on NLP techniques and tools to stakeholders.

Qualifications:
Minimum Qualifications / Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong programming skills in Python and R.
- Experience with DS frameworks (scikit-learn, NumPy).
- Knowledge of machine learning concepts and frameworks (TensorFlow, PyTorch).
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
Preferred Qualifications / Skills:
- Experience in predictive analytics and machine learning techniques.
- Proficiency in Python, R, or another open-source programming language.
- Knowledge of visualization tools such as Tableau, Power BI, and QlikView.
- Applied statistics skills.
- Experience with big data technologies (Hadoop, Spark).
- Knowledge of cloud platforms (AWS, Azure, GCP).

If you are passionate about leveraging your skills in data science and analytics to drive innovation and value creation, we encourage you to apply for this exciting opportunity at Genpact.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

punjab

On-site

You should have a Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Additionally, you should possess proven experience (1-3 years) in machine learning, data science, or AI roles. Proficiency in programming languages like Python, R, or Java is essential. Experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn is required. A strong understanding of algorithms, data structures, and software design principles is also necessary. Familiarity with cloud platforms like AWS, Azure, and big data technologies such as Hadoop and Spark is preferred. Excellent problem-solving skills, analytical thinking, communication, collaboration skills, and the ability to work methodically and meet deadlines are important attributes. Your responsibilities will include developing and implementing machine learning models and algorithms for various applications, collaborating with cross-functional teams to understand project requirements and deliver AI solutions, preprocessing and analyzing large datasets to extract meaningful insights, designing and conducting experiments to evaluate model performance and fine-tune algorithms, deploying machine learning models to production ensuring scalability and reliability, staying updated with the latest advancements in AI and machine learning technologies, documenting model development processes, maintaining comprehensive project documentation, participating in code reviews, providing constructive feedback to team members, and contributing to the continuous improvement of our AI/ML capabilities and best practices. Join our fast-paced team of like-minded individuals who share the same passion as you and tackle new challenges every day. Work alongside an exceptionally talented and intellectual team, gaining exposure to new concepts and technologies. Enjoy a friendly and high-growth work environment that fosters learning and development. 
We offer a competitive compensation package based on experience and skill. This is a full-time position with day shift, fixed shift, and morning shift schedules available. The ideal candidate should have a total of 3 years of work experience and be willing to work in person.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

You will play a crucial role in enhancing the Analytics capabilities for our businesses. Your responsibilities will include engaging with key stakeholders to comprehend Fidelity's sales, marketing, client services, and propositions context. You will collaborate with internal teams such as the data support team and technology team to develop new tools, capabilities, and solutions. Additionally, you will work closely with IS Operations to expedite the development and sharing of customized data sets. Maximizing the adoption of Cloud-Based Data Management Services will be a significant part of your role. This involves setting up sandbox analytics environments using platforms like Snowflake, AWS, Adobe, and Salesforce. You will also support data visualization and data science applications to enhance business operations. In terms of stakeholder management, you will work with key stakeholders to understand business problems and translate them into suitable analytics solutions. You are expected to facilitate smooth execution, delivery, and implementation of these solutions through effective engagement with stakeholders. Your role will also involve collaborating with the team to share knowledge and best practices, including coaching on deep learning and machine learning methodologies. Taking independent ownership of projects and initiatives within the team is crucial, demonstrating leadership and accountability. Furthermore, you will be responsible for developing and evaluating tools, methodologies, or infrastructure to address long-term business challenges. This may involve enhancing modelling software, methodologies, data requirements, and optimization environments to elevate the team's capabilities. To excel in this role, you should possess 5 to 8 years of overall experience in Analytics, with at least 4 years of experience in SQL, Python, open-source Machine Learning Libraries, and Deep Learning. 
Experience working in an AWS environment, particularly with Snowflake, is preferred. Proficiency in analytics applications such as Python, SAS, SQL, and interpreting statistical results is necessary. Knowledge of Spark, Hadoop, and Big Data platforms will be advantageous.

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Remote Work: Hybrid Overview: At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges. Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are a part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve. You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about – locally and globally. We've only begun reimagining the future – for our people, our customers, and the world. Let's create tomorrow together. A Data Scientist will be responsible for designing, developing, programming, and implementing machine learning solutions; implementing artificial/augmented intelligence systems, agentic workflows, and data engineering workflows; and performing statistical modelling and measurement by applying data engineering, feature engineering, statistical methods, ML modelling, and AI techniques to structured, unstructured, and diverse “big data” sources of machine-acquired data to generate actionable insights and foresights for real-life business problem solving and for product feature development and enhancement.
Responsibilities:
- Integrates state-of-the-art machine learning algorithms and develops new methods
- Develops tools to support analysis and visualization of large datasets
- Develops and codes software programs; implements industry-standard AutoML models (speech, computer vision, text data, LLM), statistical models, relevant ML models (device/machine-acquired data), and AI models and algorithms
- Identifies meaningful foresights based on predictive ML models from large data and metadata sources; interprets and communicates foresights, insights, and findings from experiments to product managers, service managers, business partners, and business managers
- Makes use of rapid development tools (business intelligence tools, graphics libraries, data modelling tools) to effectively communicate research findings using visual graphics, data models, machine learning model features, and feature engineering/transformations to relevant stakeholders
- Analyzes, reviews, and tracks trends and tools in the Data Science, Machine Learning, Artificial Intelligence, and IoT space
- Interacts with cross-functional teams to identify questions and issues for data engineering and machine learning model feature engineering
- Evaluates and makes recommendations to evolve data collection mechanisms to improve the efficacy of machine learning model predictions
- Meets with customers, partners, product managers, and business leaders to present findings, predictions, and foresights; gathers customer-specific requirements for business problems/processes; identifies data collection constraints and alternatives for model implementation

Skills:
- Working knowledge of MLOps, LLMs, and Agentic AI/Workflows
- Programming skills: proficiency in Python and experience with ML frameworks like TensorFlow and PyTorch
- LLM expertise: hands-on experience in training, fine-tuning, and deploying LLMs
- Foundational model knowledge: strong understanding of open-weight LLM architectures, including training methodologies, fine-tuning techniques, hyperparameter optimization, and model distillation
- Data pipeline development: strong understanding of data engineering concepts, feature engineering, and workflow automation using Airflow or Kubeflow
- Cloud & MLOps: experience deploying ML models in cloud environments like AWS, GCP (Google Vertex AI), or Azure using Docker and Kubernetes; design and implementation of predictive and optimisation models incorporating diverse data types; strong SQL and Azure Data Factory (ADF)

Qualifications:
- Bachelor's degree required; Master's or PhD in statistics, mathematics, computer science, or a related discipline preferred
- 0-2 years of experience in statistical modeling and algorithms; machine learning experience including deep learning, neural networks, genetic algorithms, etc.
- Working knowledge of big data: Hadoop, Cassandra, Spark, R; hands-on experience preferred
- Data mining, data visualization, and visualization analysis tools, including R
- Work/project experience in sensors, IoT, or the mobile industry highly preferred
- Excellent written and verbal communication; comfortable presenting to senior management and CxO-level executives
- Self-motivated and self-starting with a high degree of work ethic

Position-Specific Information:
- Travel requirements: <10% of time
- Telework: Yes, 70%; expected to visit the Zebra site 2-3 days a week or every other week
- Personal Protective Equipment (PPE) required: No
To protect candidates from falling victim to online fraudulent activity involving fake job postings and employment offers, please be aware our recruiters will always connect with you via @zebra.com email accounts. Applications are only accepted through our applicant tracking system, and we only collect personal identifying information through that system. Our Talent Acquisition team will not ask you to provide personal identifying information via e-mail or outside of the system. If you are a victim of identity theft, contact your local police department.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

kolkata, west bengal

On-site

We are looking for a highly experienced and dynamic Lead Data Scientist to join our Gen AI team in Kolkata. You should have 10-12 years of hands-on experience in the Data Analytics industry, with a successful track record of leading and delivering innovative AI solutions. As the Lead Data Scientist, you will be responsible for steering the strategic direction, ensuring technical excellence, and the overall success of our data science initiatives. This role presents a unique opportunity to influence the future of AI at Adani while leading a team of talented data scientists. With 10 years of experience in Data Analytics, NLP & Text Analytics and at least 3 years of experience in GenAI & LLM, you should have demonstrable expertise in leading and delivering AI solutions. Your responsibilities will include leading and mentoring a team of data scientists, defining and driving the strategic direction of the Gen AI practice, collaborating with cross-functional teams, developing cutting-edge AI algorithms, and deploying scalable AI solutions that drive operational efficiency and business value. You should possess a Bachelor's, Master's, or PhD in Computer Science, Statistics, Mathematics, or a related field, along with relevant certifications in data science, machine learning, or artificial intelligence. Exceptional leadership and team management abilities, strong problem-solving skills, excellent communication, and stakeholder management skills are essential for this role. Furthermore, you should have advanced proficiency in Python, R, or other relevant programming languages, a deep understanding of machine learning, deep learning, and natural language processing techniques, experience with big data technologies, and expertise in model deployment and management.
If you are passionate about leading high-impact data science initiatives and contributing to the growth of the organization, this Lead Data Scientist role at Adani offers an exciting opportunity to shape the future of AI in a collaborative environment. Join our team and help us drive innovation and make a meaningful impact.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Databricks platform administrator, you will be responsible for managing the Databricks platform and associated cloud resources. Your primary focus will be on ensuring the optimal performance, security, and efficiency of Databricks clusters and workspaces. This is a long-term contract role based in Bengaluru, Karnataka, with a hybrid work arrangement. You must have at least 5 years of experience working with the Databricks platform specifically as an administrator, not as a data engineer. In addition, cloud experience is required for this role. Your responsibilities will include configuring, deploying, and maintaining Databricks clusters and workspaces using tools like Terraform. You will monitor cluster performance, troubleshoot issues, and optimize configurations for performance and cost-effectiveness. Security is a key aspect of the role, as you will be managing access controls and encryption mechanisms and implementing security policies to protect sensitive data. Collaboration is essential in this role, as you will work closely with application development teams, data engineers, data scientists, and business analysts to understand their requirements and provide technical solutions. You will also conduct training sessions to educate users on platform best practices and capabilities. In addition, you will be responsible for managing platform costs, implementing backup and disaster recovery strategies, and integrating Databricks with other data sources, data warehouses, and data lakes. Working within an Agile delivery/DevOps methodology, you will support the application development teams in debugging and issue resolution. Overall, as a Databricks platform administrator, you will play a crucial role in ensuring the smooth operation and continuous improvement of the Databricks platform to meet the organization's data processing and analytics needs.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

noida, uttar pradesh

On-site

FCM is one of the world's largest travel management companies and a trusted partner for national and multinational organizations. With a 24/7 reach in 97 countries, FCM's flexible technology anticipates and resolves client needs, backed by experts offering in-depth local knowledge and a commitment to duty of care. As part of the ASX-listed Flight Centre Travel Group, FCM delivers the best market-wide rates, unique added-value benefits, and exclusive solutions. A leader in the travel tech space, FCM offers proprietary client solutions and provides specialist services through FCM Consulting and FCM Meetings & Events. We are seeking a skilled Azure Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in data engineering, working with Azure cloud services, and designing and implementing scalable data solutions. You will play a crucial role in developing, optimizing, and maintaining data pipelines and architectures, ensuring data quality and availability across various platforms. Key Responsibilities: - Design, develop, and maintain data pipelines and ETL processes using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. - Build and optimize data storage solutions using Azure Data Lake, Azure SQL Database, and Azure Cosmos DB. - Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. - Implement data quality checks, data governance, and security best practices across data platforms. - Monitor, troubleshoot, and optimize data workflows for performance and scalability. - Develop and maintain data models, data cataloging, and metadata management. - Automate data integration and transformation processes using Azure DevOps and CI/CD pipelines. - Stay up-to-date with emerging Azure technologies and data engineering trends. Qualifications: - Bachelor's degree in Computer Science, Information Technology, or a related field. 
- 6+ years of experience in data engineering with a focus on Azure cloud services. - Proficiency in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database. - Strong experience with SQL, Python, or other scripting languages. - Familiarity with data modeling, ETL design, and big data tools such as Hadoop or Spark. - Experience with data warehousing concepts, data lakes, and data pipelines. - Understanding of data governance, data quality, and security best practices. - Excellent problem-solving skills and ability to work in a fast-paced, collaborative environment. Preferred Skills: - Azure certification (e.g., Microsoft Certified: Azure Data Engineer Associate) is a plus. - Experience with Azure Logic Apps, Azure Functions, and API Management. - Knowledge of Power BI, Tableau, or other data visualization tools.

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

maharashtra

On-site

As a Lead Data Engineer, you will be responsible for leveraging your 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud (AWS), and Data Governance domains. Your expertise in a modern programming language such as Scala, Python, or Java, with a preference for Spark/PySpark, will be crucial in this role. Your role will require you to have experience with configuration management and version control apps like Git, along with familiarity working within a CI/CD framework. If you have experience in building frameworks, it will be considered a significant advantage. A minimum of 8 years of recent hands-on SQL programming experience in a Big Data environment is necessary, with a preference for experience in Hadoop/Hive. Proficiency in PostgreSQL, RDBMS, NoSQL, and columnar databases will be beneficial for this role. Your hands-on experience in AWS Cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, and EMR, will play a vital role in developing and maintaining ETL applications and data pipelines using big data technologies. Experience with Apache Kafka, Spark, and Airflow is a must-have for this position. If you are excited about this opportunity and possess the required skills and experience, please share your CV with us at omkar@hrworksindia.com. We look forward to potentially welcoming you to our team. Regards, Omkar

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You will be responsible for deploying and maintaining machine learning models, pipelines, and workflows in a production environment. Your role will involve re-packaging ML models developed in the non-production environment for deployment to the production environment, as well as refactoring non-production ML model implementations to an "ML as Code" implementation. You will create, manage, and execute ServiceNow change requests to facilitate the deployment of new models, and build scalable, reliable, and efficient machine learning infrastructure. Collaboration with data scientists and software engineers to design and implement machine learning workflows will be a key aspect of your role. Monitoring and logging tools will be implemented by you to ensure optimal performance of machine learning models, and you will be responsible for identifying and evaluating new technologies to enhance performance, maintainability, and reliability of machine learning systems. Your responsibilities will also include applying software engineering best practices to machine learning, supporting model development with a focus on auditability, versioning, and data security, as well as creating and maintaining technical documentation for machine learning infrastructure and workflows. Staying updated with the latest advancements in machine learning and cloud computing technologies will be crucial. You will provide expertise in data PaaS on Azure storage, big data platform services, server-less architectures, Azure SQL DB, NoSQL databases, and secure automated data pipelines. Working collaboratively and exercising sound judgement in developing robust solutions while seeking guidance on complex issues will be essential. Basic qualifications for this role include a Bachelor's or Master's degree in computer science, engineering, or a related field, along with at least 5 years of experience in software development, machine learning engineering, or a related field. 
A strong understanding of machine learning concepts and frameworks, hands-on experience in Python, and familiarity with DevOps practices and tools like Kubernetes, Docker, Jenkins, and Git are must-have skills. Experience in developing and deploying machine learning models in a production environment, working with cloud computing and database systems, building custom integrations between cloud-based systems using APIs, and maintaining ML systems with open-source tools are also required. Additionally, experience in developing with containers and Kubernetes in cloud computing environments, the ability to translate business needs into technical requirements, and proficiency in data pipeline design, development, and delivery are necessary. Strong analytical and problem-solving skills are essential for this role. Good-to-have skills include knowledge of cloud migration methodologies and processes, experience with Hadoop file formats and compression techniques, DevOps on an Azure platform, familiarity with developer tools like Visual Studio, GitLab, and Jenkins, and experience with private and public cloud architectures. Proven ability to work independently and in a team-oriented environment, as well as excellent written and oral communication skills, organizational skills, and multitasking abilities, will be beneficial for this role. Experience with MLOps in Azure and Azure-native data/big-data tools, technologies, and services is preferred.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Position: Data Modeler Architect Experience: 8 years Location: Gurugram Department: IT/Data Architecture Reports to: Data Architecture Lead / Data Manager Position Overview We are looking for a skilled and motivated Data Modeler Architect with 8 years of experience to join our IT team. The Data Modeler will be responsible for designing, developing, and maintaining data models that support business requirements and data-driven decision-making. You will work closely with business analysts, data architects, and other stakeholders to ensure the organization's data is structured effectively and efficiently. This position offers the opportunity to influence how the company handles data at a large scale and play a pivotal role in the company's data strategy. Key Responsibilities Data Modeling & Design: Design and develop conceptual, logical, and physical data models in line with the company's data strategy and business objectives. Collaborate with business stakeholders and IT teams to gather requirements and ensure models meet business needs. Create and maintain metadata and data dictionaries to ensure consistency across various data sets. Review and optimize data models to improve system performance, scalability, and quality. Data Integration & Management Assist in integrating and transforming data from multiple sources into usable formats. Work on data governance and standardization initiatives to ensure data consistency, integrity, and security. Collaborate with ETL developers to ensure proper data flow and transformation within the models. Collaboration & Support Work with other teams (e.g., data engineers, data analysts) to ensure alignment and proper implementation of data models across the organization. Provide guidance and support for data migration, quality assurance, and reporting efforts.
Act as a liaison between business users and technical teams, translating business requirements into data specifications. Performance Monitoring & Reporting Monitor data model performance and proactively suggest optimizations to improve efficiency. Assist in the creation of dashboards and reports for business users by providing insights into data structures and relationships. Skills & Qualifications: Education Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field. Master's degree preferred. Experience 5 years of experience in data modeling, database design, or a related role in an IT company. Hands-on experience with data modeling tools (e.g., Erwin, IBM InfoSphere Data Architect, Microsoft SQL Server Management Studio, or similar tools). Extensive experience in Snowflake. Strong experience with database management systems such as SQL Server, Oracle, MySQL, or PostgreSQL. Technical Skills Expertise in designing relational, dimensional, and NoSQL data models. Proficiency in SQL and understanding of database query optimization techniques. Familiarity with ETL processes and data warehousing concepts. Experience with data governance practices and tools. Soft Skills Strong analytical and problem-solving skills. Good communication and collaboration skills to work with technical and non-technical teams. Ability to work in a fast-paced environment, prioritize tasks, and manage multiple projects simultaneously. Detail-oriented with a focus on data quality and consistency. Must-Have Skills: Experience with cloud-based data platforms such as AWS, Google Cloud, or Azure. Experience with big data tools (e.g., Hadoop, Spark) and platforms. Experience in Snowflake. Experience with data visualization tools (e.g., Power BI, Tableau) is a plus. Why Join Us: Work with industry-leading data solutions and innovative technologies to help clients transform their businesses. Competitive salary and performance-based incentives.
Collaborative work environment with opportunities to engage in exciting, data-centric projects. Exposure to clients across various industries, giving you a broad range of experience and growth opportunities. Travel opportunities to meet with clients and explore new markets. Ongoing training and development to ensure you stay ahead of industry trends in the data space.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Engineer – C11/Officer (India) The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands on experience of building data pipelines. 
Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of data warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of job schedulers like Autosys; basics of entitlement management Certification on any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code
Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems
Apply fundamental knowledge of programming languages for design specifications
Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging
Serve as advisor or coach to new or lower-level analysts
Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions
Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents
Operate with a limited level of direct supervision, exercising independence of judgement and autonomy
Act as SME to senior stakeholders and/or other team members
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Key Responsibilities:
Design and implement ETL pipelines using PySpark and Big Data tools on platforms such as Hadoop, Hive and HDFS
Write scalable Python code for machine learning preprocessing tasks, working with libraries such as pandas and scikit-learn
Develop data pipelines to support model training, evaluation and inference

Skills:
Proficiency in Python programming, with experience in PySpark for large-scale data processing
Hands-on experience with Big Data technologies: Hadoop, Hive, HDFS, etc.
Exposure to machine learning workflows, model lifecycle and data preparation
Experience with ML libraries: scikit-learn, XGBoost, TensorFlow, PyTorch, etc.
Exposure to cloud platforms (AWS/GCP) for data and AI workloads

Qualifications:
4-8 years of relevant experience in the financial services industry
Intermediate-level experience in an Applications Development role
Consistently demonstrates clear and concise written and verbal communication
Demonstrated problem-solving and decision-making skills
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
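The machine learning preprocessing work this role describes can be sketched with a small pandas-only example; the column names, median imputation, and min-max scaling choices below are hypothetical illustrations, not a prescribed pipeline:

```python
import pandas as pd

# Toy feature table with missing values and a categorical column
# (all names and values are invented for illustration).
df = pd.DataFrame({
    "age":     [25, None, 47, 33],
    "income":  [50000, 62000, None, 48000],
    "segment": ["a", "b", "a", "c"],
})

# Impute numeric gaps with the column median, then min-max scale to [0, 1]
for col in ["age", "income"]:
    df[col] = df[col].fillna(df[col].median())
    lo, hi = df[col].min(), df[col].max()
    df[col] = (df[col] - lo) / (hi - lo)

# One-hot encode the categorical feature for model consumption
df = pd.get_dummies(df, columns=["segment"], prefix="seg")
```

At PySpark scale the same steps map onto `pyspark.ml.feature` transformers (Imputer, MinMaxScaler, OneHotEncoder), fitted on training data only to avoid leakage into evaluation sets.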

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Sr. Engineer - Cloud DevOps

Qualification and Experience: B.Tech/B.E/MSc/MCA; 5-8 years

Responsibilities
Design, code, test and debug new features and enhancements to Flytxt products over the Big Data platform, following best practices of agile software development
Take advantage of new technologies and techniques in enterprise programming to keep Flytxt products at the forefront of mobile marketing and advertising technology
Mentor software engineers in the team to contribute beyond their level, providing interesting challenges to each individual to help them grow
Resolve technical issues and development impediments faced by team members
Assist in continuously optimizing software engineering processes with the latest tools and practices to allow for agile, responsive and cost-effective delivery of technology
Profile and performance-tune applications

Requirements of the role
Core and advanced Java/J2EE skills; 6 years using and developing in Core Java/J2EE technologies
Minimum 3 years' experience using Eclipse, Spring Framework, Struts and web services
Working knowledge of AWS cloud, VoltDB, HBase
Working knowledge of XML, HTTP, REST, SOAP, JSON, AJAX
Working knowledge of Hibernate, Hadoop, XST, MySQL
Basic knowledge of Linux shell commands and scripting
Knowledge of Agile and Scrum development processes

Posted 1 week ago

Apply

10.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Software Architect - Cloud

Qualification and Experience: B.Tech/B.E/MSc/MCA; 10 years

Responsibilities
Architect and implement the AI-driven Cloud/SaaS offering
Research and design new frameworks and functional and non-functional features for various products, meeting high quality standards
Ensure products delivered are designed for the required scale, resiliency and efficiency
Motivate lead and senior developers by assisting them to contribute beyond their levels for their professional and technical growth
Contribute to academic outreach programs and other company branding activities

Requirements of the role
Designed and delivered one or more widely used enterprise-class SaaS application(s); preference will be given to candidates with domain knowledge of marketing technologies
Knowledge of cloud computing infrastructure; AWS certification will be an advantage
Hands-on experience designing and managing scalable distributed systems
Awareness of AI/ML technologies and their adoption in enterprise applications
Hands-on experience with big data technologies (Hadoop, MapReduce, Spark, Hive, HBase)
Hands-on experience working with in-memory databases and caching systems
Hands-on experience with ETL (Extract-Transform-Load) tools
Hands-on experience with containerization solutions such as Kubernetes
Experience with large-scale RDBMS deployments and SQL optimization
Hands-on experience building and managing Agile and Scrum development processes
Hands-on development experience in Java and Spring (Core, JPA, Boot, Cloud) technologies
Hands-on experience with Git
Aligned with and experienced in DevOps

Posted 1 week ago

Apply

0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Senior Software Engineer - R&D

Qualification and Experience: B.Tech/M.Tech/M.E/MS/M.Sc in Computer Science or a related discipline (Applied Mathematics, Statistics, Electrical and/or Computer Engineering), or MCA. Demonstrated commitment to mastering AI/machine learning through one's own initiatives (side projects, books, MOOC courses, etc.) would be a strong plus.

Responsibilities
Implement and/or productize AI/machine learning algorithms at scale, utilizing distributed computing techniques, research findings, AI best practices and state-of-the-art frameworks/libraries
Set up and manage infrastructure, tools and frameworks for data management and transformation to facilitate AI R&D
Package AI/ML algorithms to construct reusable AI recipes/components and/or create APIs for consuming packaged AI models
Create examples and prototypes demonstrating consumption of packaged AI/machine learning algorithms
Follow best practices to modularize, validate and package source code, and follow proper source control management guidelines
Conduct code reviews and mentor junior team members
Work closely with AI researchers to productize innovations

Requirements of the role
The candidate should be strong in the fundamentals of computer science, especially algorithm analysis and design, and should be proficient in Python programming. The candidate should have experience working with and maintaining Linux-based systems, and should be hands-on in some or all of the following areas:
Applying AI/machine learning, natural language processing and information retrieval algorithms on large datasets
Creating and/or consuming AI/machine learning algorithms using tools/frameworks/libraries such as Jupyter/Zeppelin, scikit-learn, NumPy, SciPy, matplotlib, pandas, TensorFlow, Keras, Apache Spark, etc.
ETL/data cleansing and enrichment using Hadoop/Spark/other Big Data frameworks
Gathering and processing raw data at scale using web scraping, API calls, crawling public repositories, etc.
Experience with SCM tools such as Git/GitHub/Bitbucket; exposure to Extreme Programming (XP), DevOps and/or Agile methodology
Experience conducting design and/or code reviews

Job Code: SSE R&D_TVM
Location: Trivandrum
For more information, please mail to: recruitment@flytxt.com
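As one concrete instance of the information-retrieval work this role mentions, here is a from-scratch TF-IDF scorer over a toy corpus. The corpus and scoring variant (raw term frequency, natural-log IDF) are illustrative assumptions; at Flytxt's scale this would run via Spark or scikit-learn rather than pure Python:

```python
import math
from collections import Counter

# Invented three-document corpus for illustration
docs = [
    "big data pipelines at scale",
    "machine learning on big data",
    "natural language processing",
]
tokenized = [d.split() for d in docs]
n = len(tokenized)

def tf_idf(term, doc_tokens):
    """Term frequency in one document times inverse document frequency."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(1 for toks in tokenized if term in toks)
    idf = math.log(n / df) if df else 0.0
    return tf * idf

# "data" appears in 2 of 3 documents, so idf = log(3/2); tf in doc 0 is 1/5
score = tf_idf("data", tokenized[0])
```

Words common to every document score zero (idf = log 1), which is exactly the down-weighting of uninformative terms that makes TF-IDF useful for retrieval and ranking.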

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Big Data Engineer

Qualification and Experience: B.Tech/B.E/MSc/MCA (any specialization); 3-8 years

Responsibilities
Manage the core analytics support for the assigned account, leading the analysis and publication of reports
Import, clean, transform, validate or model data with the purpose of understanding it or drawing conclusions from it for business decisions
Evaluate CVM campaign offerings for their impact, profitability, ROI and targets
Apply skills in segment identification and base management to enhance MOU and ARPU, customer retention and loyalty
Carry out regular data mining to identify recharge patterns, MoU trends, grace and churn, unique recharges, power STV penetration, incremental ARPU and zero usage
Make extensive use of data-mining tools such as SAS, MS Access, SQL and MS Excel to identify and exploit potential revenue streams
Monitor and analyze market penetration of various products
Use a Business Intelligence tool for pre- and post-launch analysis of products

Requirements of the role
3 years' relevant experience with Hadoop architecture, MapReduce/YARN concepts, and Hive/Pig/Sqoop/Oozie

Job Code: Big Data Engineer
Location: Trivandrum/Mumbai
For more information, please mail to: recruitment@flytxt.com
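The recharge-pattern, ARPU, and zero-usage analysis this role describes can be sketched in plain Python. In practice these would be Hive/SQL aggregations over the full subscriber base; the records and the simple "no recharge in any month" zero-usage definition below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical recharge records: (subscriber_id, month, recharge_amount)
records = [
    ("s1", "2024-01", 199), ("s1", "2024-02", 0),
    ("s2", "2024-01", 49),  ("s2", "2024-02", 49),
    ("s3", "2024-01", 0),   ("s3", "2024-02", 0),
]

# ARPU per month = total recharge revenue / subscriber base for that month
revenue, base = defaultdict(float), defaultdict(set)
for sub, month, amount in records:
    revenue[month] += amount
    base[month].add(sub)
arpu = {m: round(revenue[m] / len(base[m]), 2) for m in revenue}

# Zero-usage subscribers: no recharge across the whole observation period
totals = defaultdict(float)
for sub, _, amount in records:
    totals[sub] += amount
zero_usage = sorted(s for s, t in totals.items() if t == 0)
```

The same grouping translates directly to a Hive query (`GROUP BY month` for ARPU, `GROUP BY subscriber HAVING SUM(amount) = 0` for zero usage), which is where the Hadoop/Hive requirement of this role comes in.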

Posted 1 week ago

Apply