
4851 Hadoop Jobs - Page 44

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Develop and support scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience:
- First Class Degree in Engineering/Technology (4-year graduate course)
- 2-4 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience with cloud-native technologies and patterns
- A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have):
- ETL: Hands-on experience building data pipelines; proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica
- Big Data: Exposure to 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs and tuning them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification in any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? You will work for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision.

Location: Pan India
Key Skills: Spark + Python

Interested candidates, please apply at the link below and share your updated CV with Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Job Description
Key Skills: Hadoop, Spark, SparkSQL, Python
Mandatory Skills:
- Relevant experience in ETL and data engineering
- Strong knowledge of Spark and Python
- Strong experience in Hive/SQL and PL/SQL
- Good understanding of ETL & DW concepts and Unix scripting
- Design, implement, and maintain data pipelines to meet business requirements
- Convert business needs into complex PySpark code
- Ability to write complex SQL queries for reporting purposes
- Monitor PySpark code performance and troubleshoot issues

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity.
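The posting itself contains no sample code. Purely as an illustration of the Spark-with-Python pipeline work described above, a minimal sketch might look like this (the table name, columns, and output path are hypothetical, not from the listing):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Read raw orders from a Hive table (hypothetical name)
orders = spark.table("raw_db.orders")

# SparkSQL-style transformation: daily revenue per region for reporting
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("region", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Persist partitioned output for downstream reports (hypothetical path)
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/warehouse/curated/daily_revenue")

Performance monitoring of a job like this would typically lean on the Spark UI and executor metrics, in line with the troubleshooting duty listed above.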

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Summary: AI Engineer
Rank: Senior Associate

The selected candidate:
- Collaborates with senior data scientists to build, fine-tune, and evaluate generative AI models, including large language models (LLMs), for various applications
- Implements data pipelines and workflows to gather, clean, and prepare data from diverse sources for analysis and model training
- Works on developing and deploying machine learning algorithms, ensuring models are optimized for performance and scalability
- Assists in translating complex technical findings into actionable insights and recommendations for non-technical stakeholders, contributing to impactful business decisions

Your key responsibilities include:
- Implementing AI solutions that integrate seamlessly with existing business systems to enhance functionality and user interaction
- Reviewing complex data sets to establish data quality and highlighting where data cleansing is required to remediate the data
- Designing and implementing data models to manage data within the organization
- Migrating data from one system to another using multiple sources, identifying and implementing storage requirements
- Analyzing the latest trends, such as cloud computing and distributed processing, and their uses in business, building industry knowledge
- Peer-reviewing models and working with relevant business areas to seek their input and ensure the models are fit for the designed purpose
- Automating important infrastructure for the data science team
- Staying current with AI trends and suggesting improvements to existing systems and workflows

Skills and attributes for success:
- 5+ years' experience in machine learning and large-scale data acquisition, transformation, and cleaning, for both structured and unstructured data
- Experience with generative LLM fine-tuning and prompt engineering
- Good knowledge of cloud ecosystems; Azure is preferred
- Strong programming skills in Python and experience developing APIs
- Knowledge of containerization platforms like Docker is a must
- Experience with Git and modern software development workflows
- Experience with tools in the distributed computing, GPU, cloud platform, and Big Data domains (e.g., GCP, AWS, MS Azure, Hadoop, Spark, Databricks) is a plus
- Experience in ML/NLP algorithms, including supervised and unsupervised learning
- Experience with SQL, document DBs, and vector DBs
- Batch processing: capability to design an efficient way of processing high volumes of data, where a group of transactions is collected over a period
- Data visualization and storytelling: capability to assimilate and present data, as well as more advanced visualization techniques

Posted 1 week ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Lowe's
Lowe's is a FORTUNE® 100 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2024 sales of more than $83 billion, Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe's supports the communities it serves through programs focused on creating safe, affordable housing, improving community spaces, helping to develop the next generation of skilled trade experts, and providing disaster relief to communities in need. For more information, visit Lowes.com.

Lowe's India, the Global Capability Center of Lowe's Companies Inc., is a hub for driving our technology, business, analytics, and shared services strategy. Based in Bengaluru with over 4,500 associates, it powers innovations across omnichannel retail, AI/ML, enterprise architecture, supply chain, and customer experience. From supporting and launching homegrown solutions to fostering innovation through its Catalyze platform, Lowe's India plays a pivotal role in transforming home improvement retail while upholding a strong commitment to social impact and sustainability. For more information, visit Lowes India.

About the Team
The Pro Rapid Insights team supports Lowe's Pro business by delivering timely, actionable insights that guide data-driven decisions. We work closely with stakeholders across Marketing, Merchandising, Operations, and Product teams to provide analytical support and business visibility across key functions. Our work balances quick-turnaround requests with the development of scalable dashboards and scorecards that drive strategic planning. In a dynamic and fast-paced environment, the team is known for its agility, problem-solving mindset, and ability to turn complex data into clear, impactful solutions.

Job Summary
The primary purpose of this role is to perform mathematical and statistical analysis or model building as appropriate. This includes following analytical best practices, analyzing and reporting accurate results, and identifying meaningful insights that directly support decision making. This role provides assistance in supporting one functional area of the business in partnership with other team members. At times, this role may work directly with the business function, but the majority of time is spent working with internal team members to identify and understand business needs.

Roles & Responsibilities
Core Responsibilities:
- Analyze structured and unstructured data to uncover trends, customer behaviors, and business drivers specific to the Pro segment
- Build and maintain dashboards, scorecards, and automated reporting solutions that track business KPIs and enable fast decision-making
- Partner with cross-functional teams to define problems, gather requirements, and deliver data solutions that meet evolving business needs
- Conduct root cause analyses, design experiments, and apply statistical modeling (e.g., regression, clustering, A/B testing) to evaluate the effectiveness of programs and initiatives
- Translate analytical findings into compelling, actionable recommendations and clearly communicate them to stakeholders at various levels
- Continuously seek ways to improve data quality, operational efficiency, and the scalability of analytics solutions
- Stay up to date with new analytics tools and methodologies, contributing to the team's collective learning and innovation
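The responsibilities above mention statistical modeling such as regression, clustering, and A/B testing. Purely as a generic illustration with synthetic numbers (no connection to Lowe's data or tooling), a two-sample test for an A/B experiment in Python might look like:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(100.0, 15.0, 500)   # baseline metric, e.g., basket size
variant = rng.normal(103.0, 15.0, 500)   # treated group

# Welch's t-test: does the variant shift the mean significantly?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")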
Years of Experience: 1 to 3 years of experience in data analytics

Education Qualification & Certifications (optional)
Required Minimum Qualifications: Bachelor's degree in business administration, computer science, computer information systems (CIS), engineering, or a related field (or equivalent work experience in lieu of a degree)

Skill Set Required:
- Experience using basic analytical tools such as R, Python, SQL, SAS, Adobe, Alteryx, Knime, Aster
- Experience using visualization tools such as MicroStrategy VI, Power BI, Tableau

Secondary Skills (desired):
- Experience with business intelligence and reporting tools (e.g., MicroStrategy, Business Objects, Cognos, Adobe, TM1, Alteryx, Knime, SSIS, SQL Server) and enterprise-level databases (Hadoop, GCP, Azure, Oracle, Teradata, DB2)
- Experience working with big, unstructured data in a retail environment
- Experience with analytical tools like Python, Alteryx, Knime, SAS, R, etc.
- Experience with visualization tools like MicroStrategy VI, Power BI, SAS-VA, Tableau, D3, R-Shiny
- Programming experience using tools such as R, Python
- Data science experience using tools such as ML, text mining
- Knowledge of SQL
- Project management experience
- Experience in home improvement retail
- 1 year of experience in digital analytics implementation using enterprise-grade tools such as Adobe Dynamic Tag Management (DTM), Google Tag Manager (GTM), Tealium, Ensighten, etc. (specific to the Digital Analytics Implementation role)

Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

Remote


About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world.

Our Purpose: Bridging the World Through Travel
We believe travel allows people to enjoy, learn, and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding, and happiness. We are a skillful, driven, and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know Our Team
The Security Department oversees security, governance, risk management, compliance, and security operations for all of Agoda. We are vigilant in ensuring there is no breach or vulnerability threatening our company or endangering our employees, to keep Agoda safe and protected. Given that the security ecosystem is moving forward at tremendous speed, we like to be early adopters of new technology and products. This is a great challenge for those who want to work with the best technology in a dynamic and advanced environment.

The Opportunity
As a Security Analyst, you will focus on identifying, analyzing, and remediating vulnerabilities across our environment. You will be hands-on with penetration testing and vulnerability management, ensuring our systems remain secure and resilient.

In this role, you'll get to:
- Develop security automation tools to implement solutions at scale
- Triage security findings from multiple tools and work with hundreds of teams to get them remediated within the right SLA
- Conduct security assessments through code reviews, vulnerability assessments, penetration testing, and risk analysis
- Research the negative effects of a vulnerability, from minimizing the impact to altering security controls for future prevention
- Identify potential threats so that the organization can protect itself from malicious hackers; this includes vulnerability management, the bug bounty program, and penetration testing
- Be responsible for developing security training for developers
- Work with the DevSecOps team on integrating tools into CI/CD, as well as fine-tuning rules and precision

What you'll need to succeed:
- 5+ years in the information security field
- 5+ years of experience with penetration testing (web, infrastructure, mobile, APIs, etc.) and vulnerability management
- Minimum 1 year of experience running a bug bounty platform
- Minimum 2 years of experience with any public/private cloud environments (OpenShift, Rancher, K8s, AWS, GCP, Azure, etc.)
- Experience performing security testing, e.g., code review and web application security testing
- Familiarity with GitLab, DefectDojo, JIRA, Confluence
- Proficiency in one or more programming languages such as Python, Go, or Node.js
- Familiarity with analytics platforms and databases such as GraphQL, REST APIs, Postgres, MSSQL, Kafka, Hadoop, S3, etc.
- Strong knowledge of security assessment tools, such as security scanners (Nessus, Acunetix, and similar platforms) and fuzzers

It's great if you have:
- Knowledge of container image security, dependency checking, fuzzing, and license scanning
- Familiarity with security incident response processes and 0-days
- Security certifications

A relocation package is provided in case you prefer to relocate to Bangkok, Thailand.

Our benefits include:
- Hybrid working model
- WFH setup allowance
- 30 days of remote working from anywhere globally every year
- Employee discount for accommodation globally
- Global team of 90+ nationalities
- 40+ offices and 25+ countries
- Annual CSR / volunteer time off
- Benevity subscription for employee donations
- Volunteering opportunities globally
- Free Headspace subscription
- Free Odilo & Udemy subscriptions
- Access to an Employee Assistance Program (third party for personal and workplace support)
- Enhanced parental leave
- Life, TPD & accident insurance

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics.

We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details, please read our privacy policy.

Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency's representation of a candidate, from unsolicited third-party or agency submissions. If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site


Immediate Joiner Only - Must Have: Python, SQL, and PySpark

Overview: UsefulBI is looking for highly skilled candidates with expertise in generating powerful business insights from very large datasets, where the primary aim is to enable needle-moving business impact through cutting-edge statistical analysis. We are looking for passionate data engineers who can envision the design and development of analytical infrastructure to support strategic and tactical decision-making.

Experience Required:
- Minimum 4+ years of experience in data engineering
- Must have good knowledge of and experience in Python
- Must have good knowledge of PySpark
- Must have good knowledge of Databricks
- Must have good experience in AWS (Glue, Athena, Redshift, EMR)
- Typically requires relevant analysis work and domain-area work experience
- Expert in the management, manipulation, and analysis of very large datasets
- Superior verbal and written communication skills; ability to convey rigorous mathematical concepts and considerations to non-experts
- Good knowledge of scientific programming in scripting languages like Python

Key Responsibilities:
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader

About UsefulBI: UsefulBI's mission is to enable better business decisions through business intelligence. We do this by starting with a deep understanding of the business context and business need. Our founding team collectively has 40+ years of experience in the domains we focus on, and we staff each engagement with a functional expert who drives business centricity through the engagement. We are obsessed with being experts in the latest tools and technologies in our space, whether for data visualization, analysis, or sourcing: Tableau, Qlik, Spotfire, Hadoop, R, SAS, Matlab, etc. are all part of our core skillset. We are equally obsessive about our data science skills: we carefully select and apply the right data science algorithms and techniques to the right problems. We bring a "Full Solution" approach that combines very strong data architecture skills with cutting-edge predictive modeling/neural network capabilities and intuitive visualization concepts to ensure the best dissemination of the intelligence created by the data models. These data models are built using advanced neural networks, predictive modeling, and machine learning concepts, to build proactive rather than reactive models.
We combine our industry and functional expertise with data, proprietary analytics, and software tools to help organizations get greater clarity in decision making and gain significant long-term performance improvement.
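Given the AWS stack named in the requirements above (Glue, Athena, Redshift, EMR), a minimal sketch of kicking off an Athena query over a Glue-catalogued table with boto3 might look like this (the database, table, and S3 output location are hypothetical):

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start an Athena query; results land in the given S3 location
resp = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM curated.orders GROUP BY region",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("QueryExecutionId:", resp["QueryExecutionId"])

A production pipeline would then poll the query status and read the result set, typically wrapped in the orchestration layer of choice.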

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In testing and quality assurance at PwC, you will focus on the process of evaluating a system or software application to identify any defects, errors, or gaps in its functionality. Working in this area, you will execute various test cases and scenarios to validate that the system meets the specified requirements and performs as expected.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Apply a learning mindset and take ownership of your own development
- Appreciate diverse perspectives, needs, and feelings of others
- Adopt habits to sustain high performance and develop your potential
- Actively listen, ask questions to check understanding, and clearly express ideas
- Seek, reflect, act on, and give feedback
- Gather information from a range of sources to analyse facts and discern patterns
- Commit to understanding how the business works and building commercial awareness
- Learn and apply professional and technical standards (e.g., refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct, and meet independence requirements

Job Summary
A career in our Managed Services team will provide you an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Analytics and Insights Managed Services team brings a unique combination of industry expertise, technology, data management, and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build, and operate the next generation of software and services that manage interactions across all aspects of the value chain.

Job Description
To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies, and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future.

JD for ETL Tester at Associate Level
As an ETL Tester, you will be responsible for designing, developing, and executing SQL scripts to ensure the quality and functionality of our ETL processes.
You will work closely with our development and data engineering teams to identify test requirements and drive the implementation of automated testing solutions.

Minimum Degree Required: Bachelor's degree
Degree Preferred: Bachelor's in Computer Engineering
Minimum Years of Experience: 7 years of IT experience
Certifications Required: NA
Certifications Preferred: Automation Specialist for TOSCA, LambdaTest

Required Knowledge/Skills:
- Collaborate with data engineers to understand ETL workflows and requirements
- Perform data validation and testing to ensure data accuracy and integrity
- Create and maintain test plans, test cases, and test data
- Identify, document, and track defects, and work with development teams to resolve issues
- Participate in design and code reviews to provide feedback on testability and quality
- Develop and maintain automated test scripts using Python for ETL processes
- Ensure compliance with industry standards and best practices in data testing

Qualifications:
- Solid understanding of SQL and database concepts
- Proven experience in ETL testing and automation
- Strong proficiency in Python programming
- Familiarity with ETL tools such as Apache NiFi, Talend, Informatica, or similar
- Knowledge of data warehousing and data modeling concepts
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Experience with version control systems like Git

Preferred Knowledge/Skills:
Demonstrates extensive knowledge and/or a proven record of success in the following areas:
- Experience with cloud platforms such as AWS, Azure, or Google Cloud
- Familiarity with CI/CD pipelines and tools like Jenkins or GitLab
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka
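Since the role centers on Python-automated, SQL-driven validation of ETL loads, here is a minimal, self-contained sketch of one such check (table names and data are invented; SQLite stands in for a real warehouse, so the SQL is illustrative only):

import sqlite3

def reconcile_counts(conn, source_table, target_table):
    """Return (source_count, target_count, match) for a basic load check."""
    src = conn.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = conn.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return src, tgt, src == tgt

# Self-contained demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (id INTEGER, amount REAL);
    CREATE TABLE dw_orders  (id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 10.0), (2, 20.5);
    INSERT INTO dw_orders  VALUES (1, 10.0), (2, 20.5);
""")
src, tgt, ok = reconcile_counts(conn, "stg_orders", "dw_orders")
assert ok, f"Row count mismatch: source={src}, target={tgt}"
print(f"Row counts reconciled: {src} == {tgt}")

Real suites would extend this pattern to column-level checksums and null-rate checks, usually run under pytest in a CI pipeline.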

Posted 1 week ago

Apply

10.0 - 17.0 years

0 - 0 Lacs

Chennai

Work from Office


Job Purpose
This role includes designing and building AI/ML products at scale to improve customer understanding and sentiment analysis, recommend customer requirements, recommend optimal inputs, and improve process efficiency. This role will collaborate with product owners and business owners.

Key Responsibilities:
- Lead a team of junior and experienced data scientists
- Lead and participate in end-to-end ML project deployments requiring feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions
- Push the state of the art in applying data mining, visualization, predictive modelling, statistics, trend analysis, and other data analysis techniques to solve complex business problems, including lead classification, recommender systems, product life-cycle modelling, design optimization, and product cost and weight optimization
- Leverage and enhance applications utilizing NLP, LLMs, OCR, image-based models, and deep learning neural networks for use cases including text mining, speech, and object recognition
- Identify future development needs, advance new and emerging ML and AI technology, and set the strategy for the data science team
- Cultivate a product-centric, results-driven data science organization
- Write production-ready code, deploy real-time ML models, and expose ML outputs through APIs
- Partner with data/ML engineers and vendor partners on input data pipeline development and ML model automation
- Provide leadership to establish world-class ML lifecycle management processes

Job Requirements
Qualifications: MTech / BE / BTech / MSc in CS, Stats, or Maths

Experience:
- Over 10 years of applied machine learning experience in machine learning, statistical modelling, predictive modelling, text mining, natural language processing (NLP), LLMs, OCR, image-based models, deep learning, and optimization
- Expert Python programmer; SQL, C#; extremely proficient with the SciPy stack (e.g., numpy, pandas, scikit-learn, matplotlib)
- Proficiency with open-source deep learning platforms like TensorFlow, Keras, PyTorch
- Knowledge of the Big Data ecosystem (Apache Spark, Hadoop, Hive, EMR, MapReduce)
- Proficient in cloud technologies and services (Azure Databricks, ADF, Databricks MLflow)

Functional Competencies:
- A demonstrated ability to mentor junior data scientists and proven experience in collaborative work environments with external customers
- Proficient in communicating technical findings to non-technical stakeholders
- Holding routine peer code reviews of ML work done by the team
- Experience leading and/or collaborating with small to mid-sized teams
- Experienced in building scalable, highly available distributed systems in production
- Experienced in ML lifecycle management and MLOps tools & frameworks
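The posting names lead classification and the SciPy stack (numpy, pandas, scikit-learn). Purely as a baseline illustration on synthetic data (no claim about the employer's actual models), a scikit-learn classifier could be trained like this:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for lead data: 10 numeric features, binary label
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"Lead-classification AUC: {roc_auc_score(y_test, scores):.3f}")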

Posted 1 week ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

Remote


About Us: Chimera Rocket Labs is a leading aerospace technology company focused on pioneering new solutions for space exploration. We are a team of innovators, engineers, and scientists dedicated to advancing space missions and technology. As part of our continued growth, we are looking for a Data Analyst Intern to join our remote team and gain hands-on experience in the field of data analysis within the aerospace industry.

Position Overview: The Data Analyst Intern will work closely with our data science and engineering teams to analyze and interpret large datasets, generate meaningful insights, and support critical projects in aerospace technologies. This internship provides an excellent opportunity to apply academic knowledge to real-world problems, develop key data analysis skills, and contribute to mission-critical projects in space exploration. The ideal candidate will have a passion for data and technology, along with a strong willingness to learn and grow in a fast-paced, dynamic environment.

Key Responsibilities:
- Assist in collecting, cleaning, and preprocessing datasets from various sources, ensuring data quality and consistency
- Help perform exploratory data analysis (EDA) to identify patterns, trends, and insights within datasets
- Support the development of data visualizations and dashboards to communicate findings to technical and non-technical teams
- Assist in generating reports and presenting data-driven insights to help inform decision-making processes
- Help design and implement basic data models and statistical analyses to solve real-world aerospace challenges
- Collaborate with cross-functional teams to understand data requirements and assist in meeting project objectives
- Gain experience with data analysis tools and technologies (e.g., Python, SQL, Excel, Tableau)
- Contribute to the creation and maintenance of data pipelines for data integration and analysis
- Help troubleshoot and resolve data-related issues to ensure the smooth operation of ongoing projects
- Participate in team meetings, brainstorming sessions, and other activities to contribute project ideas and innovations

Qualifications:
- Currently pursuing a Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field
- Strong analytical skills with a passion for working with data
- Proficiency in programming languages such as Python or R, especially for data analysis tasks
- Basic knowledge of data analysis tools such as SQL, Excel, or data visualization platforms like Tableau or Power BI
- Familiarity with statistical methods and techniques for data analysis
- Ability to work independently and manage time effectively in a remote work environment
- Strong attention to detail, problem-solving skills, and a proactive attitude
- Excellent communication skills, both written and verbal, to convey technical information to diverse audiences
- A willingness to learn and develop new skills in data analysis, machine learning, and other emerging technologies

Preferred Qualifications:
- Previous internship or academic project experience involving data analysis
- Exposure to cloud platforms (e.g., AWS, Google Cloud, Microsoft Azure) and big data tools (e.g., Hadoop, Spark)
- Familiarity with machine learning concepts and tools is a plus
- Experience with version control systems like Git
- Interest or prior knowledge in aerospace, space technologies, or engineering

Why Join Chimera Rocket Labs?
- Gain hands-on experience in the high-tech aerospace industry while contributing to impactful space missions
- Flexible remote work environment with a supportive and collaborative team
- Opportunity to learn from experienced data scientists, engineers, and other professionals in a cutting-edge field
- Competitive internship compensation and potential for future career opportunities
- Enhance your skill set with exposure to advanced data analysis techniques, tools, and industry best practices

Join Chimera Rocket Labs as a Data Analyst Intern and take the first step towards a rewarding career in space technology and data science! 🚀

Posted 1 week ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid


Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? You will work for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision.

Location: Pan India
Key Skills: Spark + Python

Interested candidates, please apply at the link below and share your updated CV with Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Job Description
Key Skills: Hadoop, Spark, SparkSQL, Python
Mandatory Skills:
- Relevant experience in ETL and data engineering
- Strong knowledge of Spark and Python
- Strong experience in Hive/SQL and PL/SQL
- Good understanding of ETL & DW concepts and Unix scripting
- Design, implement, and maintain data pipelines to meet business requirements
- Convert business needs into complex PySpark code
- Ability to write complex SQL queries for reporting purposes
- Monitor PySpark code performance and troubleshoot issues

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Position: Senior Data Scientist
Location: Bangalore, Karnataka (Hybrid)
Experience: 4+ Years

Role Overview
Are you passionate about solving real-world business problems through data? We are looking for an experienced Senior Data Scientist to join our analytics team. In this role, you'll use statistical modeling, advanced analytics, and machine learning to extract actionable insights and drive decision-making.

What You'll Do:
- Build and implement sophisticated algorithms for high-dimensional data challenges
- Use statistical techniques like hypothesis testing, predictive modeling, machine learning, and text mining to uncover trends
- Develop intuitive visualizations and reports to communicate complex findings
- Work with stakeholders and clients to translate business questions into analytical problems
- Assess and integrate new datasets and technologies into our analytics platform
- Drive improvements in data workflows, processes, and tools

You Should Have:
- 4+ years of hands-on experience in data science, analytics, or statistical modeling
- A strong academic foundation in quantitative/analytical disciplines
- Programming expertise in one or more of the following: Python, Java, R
- Working knowledge of the Hadoop ecosystem, AWS, or similar data platforms
- Understanding of big data frameworks and techniques like recommender systems or social media analysis
- Exceptional communication skills and fluency in English

Bonus Points For:
- Prior experience collaborating with cross-functional teams
- A problem-solving mindset with strong analytical thinking
- Adaptability to fast-paced, evolving environments

Job Types: Full-time, Permanent, Contractual / Temporary
Schedule: Day shift, Monday to Friday; morning shift
Work Location: In person

Posted 1 week ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid


Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? You will work for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision.

Location: Pan India
Key Skills: Hadoop, Spark, SparkSQL, Scala

Interested candidates, please apply at the link below and share your updated CV with Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Job Description
We are looking for candidates:
- With experience in the Scala programming language
- With experience in Big Data technologies, including Spark, Scala, and Kafka
- With a good understanding of organizational strategy, architecture patterns (microservices, event-driven), and technology choices, able to coach the team in executing in alignment with these guidelines
- Who can apply organizational technology patterns effectively in projects and make recommendations on alternate options
- With hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, and the ability to make independent decisions within the scope of a project
- With a good understanding of data structures and algorithms
- Who can test, debug, and fix issues within established SLAs
- Who can design software that is easily testable and observable
- Who understand how team goals fit a business need
- Who can identify business problems at the project level and provide solutions
- Who understand data access patterns, streaming technology, data validation, data performance, and cost optimization
- With strong SQL skills

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.

Posted 1 week ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Bengaluru

Work from Office


Who we are

About Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies, from the world's largest enterprises to the most ambitious startups, use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.

About the Team
The Batch Compute team at Stripe manages the infrastructure, tooling, and systems behind running batch processing systems at Stripe, which are currently powered by Hadoop and Spark. Batch processing systems power several core asynchronous workflows at Stripe and operate at significant scale.

What you'll do
We're looking for a Software Engineer with experience designing, building, and maintaining high-scale, distributed systems. You will work with a team that is in charge of the core infrastructure used by product teams to build and operate batch processing jobs. You will have an opportunity to play a hands-on role in significantly rearchitecting our current infrastructure to be much more efficient and resilient. This re-architecture will introduce disaggregation of Hadoop storage and compute with open-source solutions.

Responsibilities:
- Scope and lead technical projects within the Batch Compute domain
- Build and maintain the infrastructure that powers the core of Stripe
- Directly contribute to core systems and write code
- Work closely with the open-source community to identify opportunities for adopting new open-source features, as well as contribute back to OSS
- Ensure operational excellence and enable a highly available, reliable, and secure Batch Compute platform

Who you are
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum requirements:
- 10+ years of professional experience writing high-quality, production-level code or software programs
- Experience with distributed data systems such as Spark, Flink, Trino, Kafka, etc.
- Experience developing, maintaining, and debugging distributed systems built with open-source tools
- Experience building infrastructure as a product centered around user needs
- Experience optimizing the end-to-end performance of distributed systems
- Experience scaling distributed systems in a rapidly moving environment

Preferred qualifications:
- Experience as a user of batch processing systems (Hadoop, Spark)
- Track record of open-source contributions to data processing or big data systems (Hadoop, Spark, Celeborn, Flink, etc.)

In-office expectations
Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team, and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office. Also, some teams have greater in-office attendance requirements to appropriately support our users and workflows, which the hiring manager will discuss. This approach helps strike a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.

Pay and benefits
Stripe does not yet include pay ranges in job postings in every country. Stripe strongly values pay transparency and is working toward pay transparency globally.

Posted 1 week ago

Apply

5.0 - 7.0 years

8 - 12 Lacs

Bengaluru

Work from Office


Overview: Dozee Health AI is a pioneer in contactless Remote Patient Monitoring (RPM), proven to drive transformation at scale. Headquartered in Bengaluru, India, Dozee has emerged as India's No. 1 RPM company.

We are seeking visionary individuals to help us in this very exciting journey. As part of our dynamic team, you'll have the opportunity to collaborate with top healthcare providers in the country, applying AI-powered RPM solutions to tackle some of the most pressing challenges in healthcare: enhancing staff efficiency, improving patient outcomes, and pioneering the next generation of care models.

Responsibilities:
- Initiate, develop, and optimize algorithms to solve healthcare problems
- Ensure a constant flow of case studies/publications from acquired health datasets
- Extract feedback for product and marketing in order to improve and optimize their functions
- Optimize and upgrade existing pipelines and processes to ensure performance on par with business requirements
- Keep the software stack and implementation up to date with the latest advancements in the industry

Requirements:
- 5+ years of relevant work experience
- Strong fundamentals in statistics and machine learning
- Excellent scripting and programming skills with Python (preferred), Julia, or R
- Experience working with time series data
- Experience mining, handling, and analyzing big datasets
- Experience working with TensorFlow/Keras/PyTorch
- Experience working with Spark/Hadoop and helping set up pipelines
- Excellent data communication and sharp analytical abilities, with proven design skills and the ability to think critically about the current system and existing projects

Vision & Mission: Save a million lives with Health AI. Dozee is India's leading AI-powered contactless Remote Patient Monitoring (RPM) and Early Warning System (EWS): a solution that continuously monitors patients and provides early warnings of clinical deterioration, enabling timely interventions and enhancing patient safety in hospitals, nursing facilities, and patient homes. A "Made in India for the World" solution, Dozee has pioneered the world's first non-contact blood pressure monitoring system. Trusted by leading healthcare providers in India, the USA, and Africa, Dozee is transforming patient safety and care by enhancing outcomes and reducing costs. Dozee is adopted by 300+ hospitals and monitors 16,000+ beds across 4 countries. Dozee has monitored over 1M patients, delivered 35,000+ life-saving alerts, and saved 10M+ nursing hours.

Videos:
- Science Behind Dozee: Ballistocardiography & Artificial Intelligence
- 100 Dozee beds deliver 144 life-saving alerts and INR 2.7 Cr of savings (Sattva Study)
- Dozee saves the life of a mother at home
- Leading healthcare game changers work with Dozee
- Introducing Dozee VS
- Dozee Shravan: a clinical-grade RPM service

Dozee in the News:
- Bloomberg (Oct 21, 2024): From AI beds to remote ICUs, startups are plugging India's health gaps
- News18 (Oct 26, 2024): Now, you can remotely monitor your loved ones in hospital with Bengaluru start-up's 'Shravan'
- Analytics India Magazine (Oct 29, 2024): Dozee harnesses AI for personalised patient care
- ET HealthWorld (Sep 16, 2024): We trust AI every day, from Google Maps to smartphones; so why not use it to enhance patient safety in healthcare?
- BW Healthcareworld (Oct 29, 2024): Dozee's AI-powered system predicts patient deterioration 16 hours in advance

A tertiary care hospital study published in JMIR validated Dozee's Early Warning System (EWS), showing it identified 97% of deteriorating patients, provided alerts ~19 hours in advance, and generated 5x fewer alerts, reducing alarm fatigue and improving patient outcomes. A study at King George Medical University, Lucknow, published in Frontiers in Medical Technology, demonstrated that Dozee's automation can potentially save 2.5 hours of nursing time per shift, improving workflow efficiency and allowing more focus on patient care. A study on remote patient monitoring in general wards published in Cureus found that 90%+ of healthcare providers reported improved care and patient safety, 74% of patients felt safer, and there was a 43% increase in time for direct patient care. Research by Sattva, an independent consulting firm, demonstrates Dozee's substantial impact: for every 100 Dozee-connected beds, it can save approximately 144 lives, reduce nurses' time for vital checks by 80%, and decrease ICU average length of stay by 1.3 days.

Key Highlights:
- Founded: October 2015
- Founders: Mudit Dandwate, Gaurav Parchani
- Headquarters: Bangalore, India | Houston, USA | Dubai, UAE
- Key Investors & Backers: Prime Ventures, 3one4 Capital, YourNest Capital, Gokul Rajaram, BIRAC (Department of Biotechnology), State Bank of India, Dinesh Mody Ventures, Temasek Foundation, Horizons Ventures
- Stage: Series A+
- Team Strength: 280+
- Business: Providing a continuum of care with AI-powered contactless Remote Patient Monitoring (RPM) and Early Warning System (EWS) for hospitals and homes
- Certifications & Accreditations: ISO 13485:2016 certified, ISO 27001:2022 certified, CDSCO registered, FDA 510(k) cleared for the flagship Dozee Vital Signs (VS) measurement system, and SOC 2 Type II certified

Achievements:
- Forbes India 30 Under 30
- Forbes Asia 100 to Watch
- Times Network India Health Awards 2024 for AI innovation in Bharat healthcare tech
- BML Munjal Award for Business Excellence using Learning and Development
- FICCI Digital Innovation in Healthcare Award
- Anjani Mashelkar Inclusive Innovation Award
- Marico Innovation For India Award

To know more about life@dozee, click here.

Disclaimer: Dozee is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Dozee does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need. Dozee will not tolerate discrimination or harassment based on any of these characteristics.

Posted 1 week ago

Apply

8.0 - 13.0 years

9 - 13 Lacs

Bengaluru

Work from Office


As a Sr. Data Engineer in the Digital & Data team, you will work hands-on to deliver and maintain the pipelines required by the business functions to derive value from their data. For this, you will bring data from a varied landscape of source systems into our cloud-based analytics stack and implement necessary cleaning and pre-processing steps in close collaboration with our business customers. Furthermore, you will work closely with our teams to ensure that all data assets are governed according to the FAIR principles. To keep the engineering team scalable, you and your peers will create reusable components, libraries, and infrastructure that will be used to accelerate the pace with which future use cases can be delivered.

You will be part of a team dedicated to delivering state-of-the-art solutions for enabling data analytics use cases across the Healthcare sector of a leading, global Science & Technology company. As such, you will have the unique opportunity to gain insight into our diverse business functions, allowing you to expand your skills in various technical, scientific, and business domains. Working in a project-based way covering a multitude of data domains and technological stacks, you will be able to significantly develop your skills and experience as a Data Engineer.

Who you are:
- BE/M.Sc./PhD in Computer Science or a related field and 8+ years of work experience in a relevant capacity
- Experience working with cloud environments such as Hadoop, AWS, GCP, and Azure
- Experience enforcing security controls and best practices to protect sensitive data within AWS data pipelines, including encryption, access controls, and auditing mechanisms
- An agile mindset, a spirit of initiative, and a desire to work hands-on together with your team
- Interest in solving challenging technical problems and developing the future data architecture that will enable the implementation of innovative data analytics use cases
- Experience leading small to medium-sized teams
- Experience creating architectures for ETL processes for batch as well as streaming ingestion
- Knowledge of designing and validating software stacks for GxP-relevant contexts, as well as working with PII data
- Familiarity with the data domains covering the pharma value chain (e.g., research, clinical, regulatory, manufacturing, supply chain, and commercial)
- Strong, hands-on experience working with Python, PySpark & R codebases; proficiency in additional programming languages (e.g., C/C++, Rust, TypeScript, Java) is expected
- Experience working with Apache Spark and the Hadoop ecosystem
- Experience with heterogeneous compute environments and multi-platform setups
- Basic knowledge of statistics and machine learning algorithms is favorable

This is the respective role description: The ability to easily find, access, and analyze data across an organization is key for every modern business to be able to efficiently make decisions, optimize processes, and create new business models. The Data Architect plays a key role in unlocking this potential by defining and implementing a harmonized data architecture for Healthcare.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About Balancehero India Balancehero India Pvt. Ltd. (BHI) is the wholly-owned subsidiary of Balancehero Co. Ltd., Korea, which runs and operates the mobile app “True Balance”, a one-stop destination for financial services. Founded by Charlie Lee in Korea in 2014, Balancehero started its operations in India in 2016. It started off as a balance check application, and the company has since expanded its business model to financial services. The company aims to build a financial platform for the next billion, which set the context for loans, utility services, pay later services, and commerce services. The Company's wholly-owned subsidiary, True Credits Private Limited, is a licensed NBFC that aims to bridge the financial gap in India by making finance available for all. True Credits lends through the True Balance mobile application. About True Balance Owned and operated by BalanceHero Group, True Balance is an RBI authorized Prepaid Payment Instrument (PPI) issuing entity. It offers loans through its subsidiary and RBI licensed Non-Banking Financial Company - True Credits Private Limited - and other RBI licensed partners. Founded in 2016 as a mobile app for users in India to efficiently manage their phone calls and data usage, True Balance is now one of India's top financial services platforms, providing solutions to all the financial needs of its users - from obtaining instant loans and paying utility bills to doing prepaid recharges seamlessly. To date, True Balance has raised more than US$84 million in equity funding from marquee global investors like Softbank, Naver, and Line, to name a few. The company aims to become the go-to financial services platform for the next billion people in India, playing a key role in the nationwide push towards the goal of Digital India and advancing financial inclusion amongst the unbanked and underbanked. About True Credits Established in 2019, True Credits is an RBI-licensed NBFC that provides innovative financial services to empower the next billion unbanked users. They cater to the personal and business needs of consumers by providing fast and hassle-free finance. True Credits is focused on unbanked users, who have created a huge demand for instant credit services in India. Job Description: About the Role: BalanceHero's ML Engineer builds and operates training/serving systems and data pipelines for ML/LLM-powered applications related to marketing/review/recovery of loan products. About the Responsibilities: Building and operating data pipelines and feature stores for training and serving ML/LLM models Building and operating ML/LLM model serving systems Building and operating monitoring systems for ML/LLM-powered applications Managing and deploying ML/LLM model versions Requirements: Fluent in Python Experience in designing and building data pipelines for large-scale data using the Hadoop ecosystem in a cloud environment Experience in managing model versions and deployments while continuously updating one or more ML-powered applications Experience operating ML products that provide real-time prediction services Experience with open-source (Triton, TorchServe, BentoML, ONNX, etc.) and cloud solutions (AWS SageMaker Endpoint, GCP Vertex AI, etc.) for model serving Experience monitoring data shift and consistency while maintaining ML/LLM-powered applications Preferred qualifications: Experience with PySpark and Polars in large-scale projects Experience with AWS ML-related services (EMR, Glue, SageMaker, Athena, etc.)
Experience optimizing high-performance models using GPU
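As one illustration of the real-time serving requirement, here is a minimal sketch using ONNX Runtime; the model file, feature vector, and loan-scoring framing are hypothetical, and nothing here should be read as BalanceHero's actual serving stack.

```python
# Hedged real-time inference sketch with ONNX Runtime; the model is invented.
import numpy as np
import onnxruntime as ort

# Load a (hypothetical) exported model once at process start-up.
session = ort.InferenceSession("credit_model.onnx")

def predict(features: list) -> float:
    """Score a single request synchronously, as a real-time endpoint would."""
    x = np.asarray([features], dtype=np.float32)
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: x})
    return float(outputs[0].ravel()[0])

if __name__ == "__main__":
    print(predict([0.2, 1.5, 3.1]))
```

In production this function would typically sit behind a serving framework such as the ones the listing names (Triton, TorchServe, BentoML, or SageMaker Endpoints) rather than being called directly.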

Posted 1 week ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

GlassDoor logo

TECHNICAL ARCHITECT Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise. 6. Ensuring Quality: Ensure that systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), Web App development frameworks and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux). 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13.
Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management. Experience Required Technical Architect with 7 - 12 years of experience Salary 22-25 LPA Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Experience: total work: 1 year (Preferred) Work Location: In person
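Since the role calls out design patterns such as Factory, Singleton, and Observer, here is a small illustrative Factory sketch in Python; the notifier classes are hypothetical examples, not part of any specific product.

```python
# Illustrative Factory pattern: callers ask for a channel by name and never
# touch the concrete classes, keeping the code open to new channels.
from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"email: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"sms: {message}")

def notifier_factory(channel: str) -> Notifier:
    """Create the right Notifier without callers knowing concrete classes."""
    registry = {"email": EmailNotifier, "sms": SmsNotifier}
    try:
        return registry[channel]()
    except KeyError:
        raise ValueError(f"unknown channel: {channel}") from None

notifier_factory("email").send("build finished")
```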

Posted 1 week ago

Apply

8.0 years

30 - 38 Lacs

Haryāna

Remote

GlassDoor logo

Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Work Location: Hybrid remote in Haryana, Haryana
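As a rough illustration of the pipeline work described above, here is a hypothetical AWS Lambda handler that reacts to an S3 put event, applies a trivial cleaning step, and writes the result onward; the bucket names, keys, and cleaning rule are placeholders, not this employer's pipeline.

```python
# Sketch of an S3-triggered Lambda ETL step using boto3; names are invented.
import csv
import io
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Locate the object that triggered the event (standard S3 event shape).
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    if not rows:
        return {"rows_in": 0, "rows_out": 0}

    # Trivial data-quality rule: drop rows without an id.
    kept = [r for r in rows if r.get("id")]

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(kept)

    # "curated-bucket" is a placeholder destination.
    s3.put_object(Bucket="curated-bucket", Key=key, Body=out.getvalue())
    return {"rows_in": len(rows), "rows_out": len(kept)}
```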

Posted 1 week ago

Apply

15.0 years

0 Lacs

Bhubaneshwar

On-site

GlassDoor logo

Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Apache Spark Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Assist in the documentation of application specifications and user guides. - Collaborate with cross-functional teams to gather requirements and provide technical insights. Professional & Technical Skills: - Must Have Skills: Proficiency in Apache Spark. - Good To Have Skills: Experience with data processing frameworks such as Hadoop. - Strong understanding of distributed computing principles. - Familiarity with programming languages such as Java or Scala. - Experience in developing and deploying applications in cloud environments. Additional Information: - The candidate should have a minimum of 3 years of experience in Apache Spark. - This position is based at our Bhubaneswar office. - A 15 years full time education is required.
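For context on the Apache Spark skill this listing centres on, here is a minimal PySpark aggregation sketch of the kind of distributed computation involved; the event source, schema, and path are hypothetical.

```python
# Small distributed-aggregation sketch in PySpark; the dataset is invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark_agg_demo").getOrCreate()

events = spark.read.json("s3://events/clicks/")  # hypothetical source

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "page")
    .agg(F.count("*").alias("views"),
         F.countDistinct("user_id").alias("unique_users"))
)

daily.orderBy("day").show(10)
```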

Posted 1 week ago

Apply

4.0 years

5 - 10 Lacs

Noida

On-site

GlassDoor logo

Every day, Global Payments makes it possible for millions of people to move money between buyers and sellers using our payments solutions for credit, debit, prepaid and merchant services. Our worldwide team helps over 3 million companies, more than 1,300 financial institutions and over 600 million cardholders grow with confidence and achieve amazing results. We are driven by our passion for success and we are proud to deliver best-in-class payment technology and software solutions. Join our dynamic team and make your mark on the payments technology landscape of tomorrow. Summary of This Role Works throughout the software development life cycle and performs in a utility capacity to create, design, code, debug, maintain, test, implement and validate applications with a broad understanding of a variety of languages and architectures. Analyzes existing applications or formulates logic for new applications, procedures, flowcharting, coding and debugging programs. Maintains and utilizes application and programming documents in the development of code. Recommends changes in development, maintenance and system standards. Creates appropriate deliverables and develops application implementation plans throughout the life cycle in a flexible development environment. What Part Will You Play? Develops basic to moderately complex code using front and / or back end programming languages within multiple platforms as needed in collaboration with business and technology teams for internal and external client software solutions. Designs, creates, and delivers routine to moderately complex program specifications for code development and support on multiple projects/issues with a wide understanding of the application / database to better align interactions and technologies. Analyzes, modifies, and develops moderately complex code/unit testing in order to develop concise application documentation. Performs testing and validation requirements for moderately complex code changes. Performs corrective measures for moderately complex code deficiencies and escalates alternative proposals. Participates in client facing meetings, joint venture discussions, vendor partnership teams to determine solution approaches. Provides support to leadership for the design, development and enforcement of business / infrastructure application standards to include associated controls, procedures and monitoring to ensure compliance and accuracy of data. Applies a full understanding of procedures, methodology and application standards to include Payment Card Industry (PCI) security compliance. Conducts and provides basic billable hours and resource estimates on initiatives, projects and issues. Assists with on-the-job training and provides guidance to other software engineers. What Are We Looking For in This Role? Minimum Qualifications BS in Computer Science, Information Technology, Business / Management Information Systems or related field Typically a minimum of 4 years of professional experience in coding, designing, developing and analyzing data. Typically has an advanced knowledge and use of one or more front / back end languages / technologies and a moderate understanding of the other corresponding end language / technology from the following, but not limited to: two or more modern programming languages used in the enterprise, experience working with various APIs and external services, experience with both relational and NoSQL databases.
Preferred Qualifications BS in Computer Science, Information Technology, Business / Management Information Systems or related field 6+ years of professional experience in coding, designing, developing and analyzing data, and experience with IBM Rational Tools What Are Our Desired Skills and Capabilities? Skills / Knowledge - A seasoned, experienced professional with a full understanding of area of specialization; resolves a wide range of issues in creative ways. This job is the fully qualified, career-oriented, journey-level position. Job Complexity - Works on problems of diverse scope where analysis of data requires evaluation of identifiable factors. Demonstrates good judgment in selecting methods and techniques for obtaining solutions. Networks with senior internal and external personnel in own area of expertise. Supervision - Normally receives little instruction on day-to-day work, general instructions on new assignments. Operating Systems: Linux distributions including one or more of the following: Ubuntu, CentOS/RHEL, Amazon Linux Microsoft Windows z/OS Tandem/HP-Nonstop Database - Design, familiarity with DDL and DML for one or more of the following databases: Oracle, MySQL, MS SQL Server, IMS, DB2, Hadoop Back-end technologies - Java, Python, .NET, Ruby, Mainframe COBOL, Mainframe Assembler Front-end technologies - HTML, JavaScript, jQuery, CICS Web Frameworks - Web technologies like Node.js, React.js, Angular, Redux Development Tools - Eclipse, Visual Studio, Webpack, Babel, Gulp Mobile Development - iOS, Android Machine Learning - Python, R, Matlab, TensorFlow, DMTK Global Payments Inc. is an equal opportunity employer. Global Payments provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex (including pregnancy), national origin, ancestry, age, marital status, sexual orientation, gender identity or expression, disability, veteran status, genetic information or any other basis protected by law. If you wish to request reasonable accommodations related to applying for employment or provide feedback about the accessibility of this website, please contact jobs@globalpay.com.

Posted 1 week ago

Apply

0.0 - 5.0 years

27 - 30 Lacs

Pune, Maharashtra

Remote

Indeed logo

We’re Hiring: Talend Lead – ETL/Data Integration Location: Bengaluru / Hyderabad / Chennai / Pune (Hybrid – 1–2 days/week in-office) Position: Full-time | Open Roles: 1 Work Hours: 2 PM – 11 PM IST (Work from office till 6 PM, continue remotely after) CTC: ₹27–30 LPA (including 5% variable) Notice Period: 0–30 Days (Serving notice preferred) About the Role: We’re looking for a seasoned Talend Lead with 8+ years of experience, including 5–6 years specifically in Talend ETL development. This role demands hands-on technical expertise along with the ability to mentor teams, build scalable data integration pipelines, and contribute to high-impact enterprise data projects. Key Responsibilities: Lead and mentor a small team of Talend ETL developers (6+ months of lead experience acceptable) Design, build, and optimize data integration solutions using Talend and AWS Collaborate with business stakeholders and project teams to define requirements and architecture Implement robust and scalable ETL pipelines integrating various data sources: RDBMS, NoSQL, APIs, cloud platforms Perform advanced SQL querying, transformation logic, and performance tuning Ensure adherence to development best practices, documentation standards, and job monitoring Handle job scheduling, error handling, and data quality checks Stay updated with Talend platform features and contribute to the evolution of ETL frameworks Required Skills & Experience: 8+ years in data engineering/ETL, including 5+ years in Talend Minimum 6–12 months of team leadership or mentorship experience Proficient in Talend Studio, TAC/TMC, and AWS services (S3, Glue, Athena, EC2, RDS, Redshift) Strong command of SQL for data manipulation and transformation Experience integrating data from various formats and protocols: JSON, XML, REST/SOAP APIs, CSV, flat files Familiar with data warehousing principles, ETL/ELT processes, and data profiling Working knowledge of monitoring tools, job schedulers, and Git for version control Effective communicator with the ability to work in distributed teams Must not have short-term projects or gaps of more than 3 months. No JNTU profiles will be considered. Preferred Qualifications: Experience leading a team of 8+ Talend developers Experience working with US healthcare clients Bachelor’s degree in Computer Science/IT or related field Talend and AWS certifications (e.g., Talend Developer, AWS Cloud Practitioner) Knowledge of Terraform, GitLab, and CI/CD pipelines Familiarity with scripting (Python or Shell) Exposure to Big Data tools (Hadoop, Spark) and Talend Big Data Suite Experience working in Agile environments Job Types: Full-time, Permanent Pay: ₹2,700,000.00 - ₹3,000,000.00 per year Schedule: Day shift Evening shift Monday to Friday Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Talend: 8 years (Required) ETL: 5 years (Required) Work Location: In person
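As an illustration of the data-quality checks mentioned above, here is a small Python sketch; it uses an in-memory SQLite table purely so it runs self-contained, whereas in this role the same SQL would typically run against the warehouse from a Talend job or scheduler.

```python
# Self-contained data-quality-check sketch; the table and rules are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id TEXT, amount REAL);
    INSERT INTO orders VALUES ('A1', 10.0), ('A1', 10.0), ('A2', NULL);
""")

# Each check returns a count of violations; zero means the rule passes.
checks = {
    "duplicate_keys": "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM orders",
    "null_amounts":   "SELECT COUNT(*) FROM orders WHERE amount IS NULL",
}

for name, sql in checks.items():
    (violations,) = conn.execute(sql).fetchone()
    print(f"{name}: {violations} violation(s)")
```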

Posted 1 week ago

Apply

4.0 years

0 - 0 Lacs

India

Remote

GlassDoor logo

Job Title: Python AI/ML Developer Company: Weavers Web Solutions Private Limited Location: Kolkata Job Type: Full-Time Experience Level: 4 Years About Weavers Web Solutions Private Limited: Weavers Web Solutions is a leading technology-driven company specializing in providing innovative web and AI-driven solutions. We focus on delivering transformative products and services, leveraging emerging technologies like Artificial Intelligence, Machine Learning, and Data Science. Our team thrives on collaboration, problem-solving, and delivering exceptional results for our clients. Position Overview: We are looking for a highly skilled and experienced Python AI/ML Developer to join our growing team. You will be working on cutting-edge projects in AI, machine learning, and data analytics, helping us build intelligent systems that drive business success. The ideal candidate will have a strong background in Python programming, machine learning algorithms, data analysis, and model deployment. Key Responsibilities: Design, implement, and maintain scalable AI/ML models using Python. Collaborate with cross-functional teams to define project goals, data requirements, and deliverables. Develop machine learning pipelines for data preprocessing, feature engineering, model training, evaluation, and deployment. Optimize existing ML models for better performance, scalability, and efficiency. Stay updated with the latest trends in AI/ML technologies and apply them to real-world projects. Use libraries such as TensorFlow, Keras, Scikit-learn, Pandas, NumPy, and others to build models. Work with big data platforms and tools like Hadoop, Spark, and cloud-based services. Document solutions, processes, and best practices for team collaboration and knowledge sharing. Conduct data analysis, visualization, and reporting for business insights. Ensure model reproducibility and track experiment results using version control systems like Git. Provide technical leadership and mentoring to junior developers on AI/ML best practices. Required Skills and Qualifications: Experience: Minimum of 4 years of professional experience as an AI/ML Developer or Data Scientist. Programming Skills: Strong proficiency in Python, including experience with libraries such as Pandas, NumPy, SciPy, Scikit-learn, and TensorFlow/PyTorch. Machine Learning Algorithms: In-depth understanding of supervised and unsupervised learning, deep learning, natural language processing (NLP), reinforcement learning, etc. Model Development: Experience in building and deploying AI/ML models in production environments. Data Handling: Solid understanding of data preprocessing, feature extraction, and data normalization. Cloud Platforms: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes). Version Control: Proficiency with Git or similar version control systems. Problem Solving: Strong analytical and problem-solving skills with a passion for working with large datasets. Communication Skills: Ability to explain complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams. Why Join Us? Innovation-Driven Culture: Work with a team of passionate professionals who are dedicated to pushing the boundaries of technology. Growth Opportunities: We offer a clear career progression path, along with opportunities to enhance your skills through training and exposure to the latest technologies. 
Collaborative Environment: Enjoy a collaborative, supportive, and flexible work culture. Competitive Compensation: We offer a competitive salary, benefits, and performance-based incentives. Work-Life Balance: Flexible working hours and the possibility of remote work. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹60,000.00 per month Benefits: Health insurance Provident Fund Schedule: Monday to Friday Supplemental Pay: Performance bonus Work Location: In person
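For a concrete flavour of the ML pipeline work described in this listing, here is a minimal scikit-learn sketch covering preprocessing, training, and evaluation on a bundled toy dataset; it is a generic example, not a Weavers Web project.

```python
# Minimal training pipeline: scaling + classifier + holdout evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # feature normalisation
    ("clf", LogisticRegression(max_iter=1000)),  # simple baseline model
])
pipe.fit(X_train, y_train)
print(f"test accuracy: {pipe.score(X_test, y_test):.3f}")
```

Bundling preprocessing and model into one Pipeline object keeps the exact same transformations applied at training and inference time, which is what makes the model reproducible and deployable.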

Posted 1 week ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Job Reference # 318227BR Job Type Full Time Your role Are you interested in pursuing a career in Data Science and AI with the STAAT team in Global Wealth Management Americas? Building deployable machine learning models and embedding them in business workflows Defining AI research problems and criteria for evaluating success Contributing to product design, developing features to enhance the product based on Natural Language techniques Processing massive amounts of structured and unstructured data Researching new machine learning solutions for complex business problems and embedding the models within the business workflow Communicating findings Your team You’ll be working with the STAAT Data Science team in Mumbai Your expertise Degree in Computer Science or related field Strong understanding of probability, statistics, linear algebra and calculus 4+ years’ experience in developing NLP models Expert-level proficiency in Python 2+ years’ experience in building NLP models for sentiment scoring, summarization and abstraction using deep learning and transfer learning techniques Experience in dealing with large-scale unstructured text data Experience in machine learning packages ML experience with different supervised and unsupervised learning algorithms Knowledge of a variety of machine learning techniques such as classification, clustering, optimization, Random Forest, PCA, XGBoost, natural language processing, deep neural networks, etc. Good understanding of the mathematical underpinnings and their real-world advantages/drawbacks Hands-on experience of using programming languages (Python, R, SQL, etc.) to manipulate data, develop models and derive insights Hands-on experience of database and analytical technologies in the industry, such as Greenplum, DB2, Dataiku, Hadoop, etc. Hands-on experience deploying analytical models to solve business problems Ability to develop experimental and analytical plans for data modeling processes and A/B testing About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
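As a small illustration of the sentiment-scoring work mentioned in the role, here is a hedged sketch using the Hugging Face Transformers pipeline API with its default pretrained model (downloaded on first run); the example sentences are invented and have no connection to UBS data.

```python
# Sentiment scoring with a pretrained transformer via the pipeline API.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default pretrained model

for text in ["Earnings beat expectations.", "Guidance was cut sharply."]:
    # Each result is a dict with a label (POSITIVE/NEGATIVE) and a score.
    print(text, "->", sentiment(text)[0])
```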

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Key Responsibilities: Assess and understand existing data platforms (e.g., Hadoop, Spark, on-premises systems, or other cloud data warehouses) Design and implement end-to-end migration strategies to Databricks. Migrate data pipelines, notebooks, models, and jobs to Databricks. Optimize performance and cost-efficiency during and after the migration. Develop and maintain CI/CD pipelines for Databricks-based projects. Collaborate with data engineers, analysts, and business stakeholders to ensure a smooth transition and minimal disruption. Document migration processes, architecture, and configurations. Required Skills & Qualifications 5+ years of experience in data engineering or related roles. 2+ years of hands-on experience with Databricks. Experience migrating from systems like Hadoop, Spark, or legacy ETL tools to Databricks. Strong expertise in Apache Spark, PySpark, SQL, and Delta Lake. Familiarity with cloud platforms (Azure, AWS, or GCP), especially Azure Databricks or AWS Databricks. Experience with CI/CD, DevOps, and version control tools (Git, Azure DevOps, etc.). Strong problem-solving skills and ability to work independently. Excellent communication and documentation skills. Preferred Qualifications Databricks certification (e.g., Databricks Certified Data Engineer Associate/Professional). Experience with MLflow, Unity Catalog, or data governance frameworks. Familiarity with Terraform or Infrastructure as Code (IaC) for Databricks setup.
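To illustrate one common step in such a migration, here is a hedged PySpark sketch that rewrites a legacy Parquet extract as a Delta table; the paths and table name are hypothetical, and it assumes a Databricks or Delta-enabled Spark runtime.

```python
# Sketch of a typical migration step: legacy Parquet -> managed Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("to_delta").getOrCreate()

legacy = spark.read.parquet("/mnt/legacy/warehouse/sales")  # old extract

(legacy.write
       .format("delta")      # Delta gives ACID transactions and time travel
       .mode("overwrite")
       .save("/mnt/lakehouse/sales"))

# Register the new location so SQL consumers see it as a table.
spark.sql(
    "CREATE TABLE IF NOT EXISTS sales USING DELTA LOCATION '/mnt/lakehouse/sales'"
)
```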

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Experience of working in AWS cloud services (e.g. S3, AWS Glue, Glue Data Catalog, Step Functions, Lambda, EventBridge, etc.) Must have hands-on experience with DQ libraries for data quality checks Proficiency in data modelling and database management Strong programming skills in Python, Unix and in ETL technologies like Informatica Experience of DevOps and Agile methodology and associated toolsets (including working with code repositories) and methodologies Knowledge of big data technologies like Hadoop and Spark Must have hands-on experience with reporting tools such as Tableau, QuickSight and MS Power BI Must have hands-on experience with databases like Postgres and MongoDB Experience of using industry-recognised frameworks; experience with StreamSets & Kafka is preferred Experience in data sourcing including real-time data integration Proficiency in Snowflake Cloud and associated data migration from on-premise to cloud, with knowledge of databases like Snowflake, Azure Data Lake and Postgres
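As a reference point for the Glue experience listed above, here is the standard skeleton of an AWS Glue job in Python with a simple data-quality gate added; the catalog database and table names are hypothetical, and the script runs only inside the Glue environment.

```python
# Standard AWS Glue job skeleton (executes within the Glue runtime).
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog; names below are invented placeholders.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="transactions")
df = dyf.toDF()

# Simple data-quality gate before any downstream write.
assert df.filter(df["txn_id"].isNull()).count() == 0, "null txn_id found"

job.commit()
```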

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies