
1612 Pandas Jobs - Page 5

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

The role of a member in the New Analytics Team based in Hyderabad involves understanding business processes and data to model requirements for creating analytics solutions. You will be responsible for building predictive models and recommendation engines using cutting-edge machine learning techniques to enhance the efficiency and effectiveness of business processes. Your tasks will include churning and analyzing data to identify actionable insights and patterns for business use. Additionally, you will assist the Function Head in data preparation and modeling tasks as required. Collaboration with both business and IT teams is essential for understanding and collecting data. Your responsibilities will also include collecting, collating, cleaning, processing, and transforming large volumes of primarily tabular data comprising numerical, categorical, and some textual information. Applying data preparation techniques such as data filtering, joining, cleaning, missing value imputation, feature extraction, feature engineering, feature selection, dimensionality reduction, feature scaling, and variable transformation will be a part of your routine tasks. You will be expected to apply basic algorithms like linear regression, logistic regression, ANOVA, KNN, various clustering methods, SVM, Naive Bayes, decision trees, principal components, and association rule mining. Additionally, ensemble modeling algorithms like bagging (Random Forest), boosting (GBM, LGBM, XGBoost, CatBoost), time-series modeling, and other state-of-the-art algorithms will also be utilized as required. Your role will involve employing modeling concepts such as hyperparameter optimization, feature selection, stacking, blending, K-fold cross-validation, bias and variance, and combating overfitting. Building predictive models using state-of-the-art machine learning techniques for regression, classification, clustering, recommendation engines, etc., will be a key part of your responsibilities. Furthermore, you will analyze business data to uncover hidden patterns and insights, identify explanatory causes, and make strategic recommendations based on your findings. To excel in this role, you should hold a BE/B. Tech degree in any stream and possess strong expertise in Python libraries like Pandas and Scikit Learn. Proficiency in coding according to the outlined requirements is essential. Experience with Python editors such as PyCharm and/or Jupyter Notebooks is a must, along with the ability to organize code into modules, functions, and/or objects. Knowledge of using ChatGPT for machine learning will be advantageous, while familiarity with basic SQL for querying and Excel for data analysis is necessary. Understanding basics of statistics, including distributions, hypothesis testing, and sampling techniques, is a prerequisite. Experience with Kaggle and familiarity with R are desirable. Ideal candidates will have a minimum of 4 years of experience solving business problems through data analytics, data science, and modeling, with at least 2 years of full-time experience as a data scientist. They should have worked on at least 3 projects involving ML model building that were utilized in production by businesses or other clients. 
Your primary responsibilities will include spending 35% of your time on data preparation for modeling, 35% on building ML/AI models for various business requirements, 20% on performing custom analytics to provide actionable insights to the business, and 10% on assisting the Function Head in data preparation and modeling tasks as needed. Candidates without familiarity with deep learning algorithms, image processing and classification, and text modeling using NLP techniques will not be considered for selection. For applying to this position, please email your application to careers@isb.edu.,
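For illustration only, here is a minimal scikit-learn sketch of the preparation-and-validation workflow this posting describes (imputation, encoding, scaling, an ensemble model, K-fold cross-validation); the synthetic data, column names, and model choice are assumptions, not part of the role.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic tabular data standing in for real numerical + categorical business data.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "balance": rng.normal(50_000, 12_000, n),
    "segment": rng.choice(["retail", "sme", "corporate"], n),
})
df.loc[rng.choice(n, 20, replace=False), "balance"] = np.nan   # inject missing values
y = (df["balance"].fillna(0) + 500 * (df["segment"] == "corporate") > 55_000).astype(int)

numeric_cols, categorical_cols = ["age", "balance"], ["segment"]
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])
model = Pipeline([("prep", preprocess),
                  ("clf", RandomForestClassifier(n_estimators=200, random_state=0))])

# K-fold cross-validation estimates out-of-sample accuracy and guards against overfitting.
scores = cross_val_score(model, df, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping the preprocessing steps inside the pipeline means imputers and encoders are fitted only on the training folds during cross-validation, avoiding leakage.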

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a talented and driven Machine Learning Engineer with 2-5 years of experience, looking to join a dynamic team in Chennai. Your expertise lies in machine learning principles and hands-on experience in building, deploying, and managing ML models in production environments. In this role, you will focus on MLOps practices and orchestration to ensure robust, scalable, and automated ML pipelines. Your responsibilities will include designing, developing, and implementing end-to-end MLOps pipelines for deploying, monitoring, and managing machine learning models in production. You will use orchestration tools such as Apache Airflow, Kubeflow, AWS Step Functions, or Azure Data Factory to automate ML workflows. Implementing CI/CD practices for ML code, models, and infrastructure will be crucial for ensuring rapid and reliable releases. You will also establish monitoring and alerting systems for deployed ML models, optimize performance, troubleshoot and debug issues across the ML lifecycle, and create and maintain technical documentation. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field, along with 2-5 years of professional experience as a Machine Learning Engineer or MLOps Engineer. Your skills should include proficiency in Python and its ML ecosystem, hands-on experience with major cloud platforms and their ML/MLOps services, knowledge of orchestration tools, containerization technologies, CI/CD pipelines, and database systems. Strong problem-solving, analytical, and communication skills are essential for collaborating effectively with Data Scientists, Data Engineers, and Software Developers in an Agile environment.,
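As a rough sketch of the orchestration work described above, an Apache Airflow DAG wiring together extract, train, and deploy steps might look like the following; the DAG id, schedule, and task bodies are placeholders, not this team's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    """Pull raw data and build the training dataset (details omitted)."""

def train_model():
    """Fit the model and log it to a registry (details omitted)."""

def evaluate_and_deploy():
    """Validate metrics and promote the model if thresholds pass (details omitted)."""

with DAG(
    dag_id="ml_training_pipeline",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="evaluate_and_deploy", python_callable=evaluate_and_deploy)

    # Linear dependency chain: extract -> train -> evaluate/deploy.
    extract >> train >> deploy
```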

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Python Developer specializing in Artificial Intelligence (AI) and Machine Learning (ML), you will play a crucial role in our team by designing, deploying, and enhancing intelligent applications and solutions using Python and ML frameworks. Your responsibilities will include developing scalable Python applications with AI/ML capabilities, creating machine learning models for various purposes such as classification, prediction, and NLP, and collaborating closely with data scientists and product teams to integrate ML models into production systems. You will also be tasked with optimizing algorithms for performance, maintaining and improving existing ML pipelines, and ensuring clean, reusable, and well-documented code. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, alongside 3-5 years of hands-on experience in Python programming. A strong grasp of AI/ML concepts and applications is essential, as well as proficiency in libraries and frameworks like NumPy, Pandas, Scikit-learn, TensorFlow, and PyTorch. Experience with data preprocessing, model training, and evaluation, as well as knowledge of REST APIs and deployment tools like Flask or FastAPI, will be beneficial. Familiarity with cloud platforms and ML Ops tools is a plus, and strong analytical, problem-solving, communication, and collaboration skills are necessary. Additionally, having experience with Natural Language Processing (NLP), exposure to deep learning and computer vision projects, and knowledge of containerization tools like Docker and orchestration with Kubernetes are considered advantageous. By joining our team, you will have the opportunity to work on real-world AI/ML projects in a collaborative and innovative work environment with a flexible work culture and various learning and development opportunities. Furthermore, we offer benefits such as Private Health Insurance, a Hybrid Work Mode, Performance Bonus, and Paid Time Off.,
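A minimal sketch of the FastAPI deployment path mentioned above; the toy iris model is trained inline purely so the example is self-contained, whereas a real service would load a persisted pipeline.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a toy model at startup so the sketch runs on its own;
# a production service would instead load a saved model (e.g., with joblib).
iris = load_iris()
model = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

app = FastAPI(title="ML inference service")

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features):
    row = [[f.sepal_length, f.sepal_width, f.petal_length, f.petal_width]]
    label = int(model.predict(row)[0])
    return {"class_index": label, "class_name": str(iris.target_names[label])}

# Run locally with: uvicorn main:app --reload   (module name "main" is an assumption)
```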

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Join us to lead data modernization and maximize analytics utility. As a Data Owner Lead at JPMorgan Chase within the Data Analytics team, you play a crucial role in enabling the business to drive faster innovation through data. You are responsible for managing customer application and account opening data, ensuring its quality and protection, and collaborating with technology and business partners to execute data requirements. Your primary job responsibilities include documenting data requirements for your product and coordinating with technology and business partners to manage change from legacy to modernized data. You will have to model data for efficient querying and use in LLMs by utilizing the business data dictionary and metadata. Moreover, you are expected to develop ideas for data products by understanding analytics needs and creating prototypes for productizing datasets. Additionally, developing proof of concepts for natural language querying and collaborating with stakeholders to rollout capabilities will be part of your tasks. You will also support the team in building backlog, grooming initiatives, and leading data engineering scrum teams. Managing direct or matrixed staff to execute data-related tasks will also be within your purview. To be successful in this role, you should hold a Bachelor's degree and have at least 5 years of experience in data modeling for relational, NoSQL, and graph databases. Expertise in data technologies such as analytics, business intelligence, machine learning, data warehousing, data management & governance, and AWS cloud solutions is crucial. Experience with natural language processing, machine learning, and deep learning toolkits (like TensorFlow, PyTorch, NumPy, Scikit-Learn, Pandas) is also required. Furthermore, you should possess the ability to balance short-term goals and long-term vision in complex environments, along with knowledge of open data standards, data taxonomy, vocabularies, and metadata management. A Master's degree is preferred for this position, along with the aforementioned qualifications, capabilities, and skills.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Machine Learning/AI Engineer, your primary role will involve designing, developing, and implementing machine learning models and AI solutions using Python. You will collaborate closely with various teams to comprehend business requirements, pinpoint opportunities for leveraging machine learning and AI technologies, and create solutions to tackle complex challenges. Working with extensive datasets, you will apply statistical analysis and machine learning techniques to extract valuable insights and construct scalable and robust algorithms. Your key responsibilities will encompass understanding business problems in collaboration with stakeholders, collecting and preprocessing large datasets from diverse sources, developing machine learning models and AI algorithms using Python libraries like TensorFlow, PyTorch, scikit-learn, or similar, engineering features from raw data to enhance model performance and interpretability, training, validating, and fine-tuning machine learning models utilizing appropriate evaluation metrics and validation techniques, deploying machine learning models into production environments with a focus on scalability, reliability, and performance, monitoring model performance in production, conducting periodic model retraining, and addressing any arising issues, documenting code, algorithms, and processes to facilitate knowledge sharing and maintainability, staying informed about the latest advancements in machine learning and AI research, and exploring innovative solutions to enhance existing systems. Ideally, you should possess over 3 years of demonstrated experience in developing and deploying machine learning models and AI solutions using Python, with a preference for familiarity with deep learning frameworks. Proficiency in Python programming and libraries such as TensorFlow, PyTorch, scikit-learn, pandas, and NumPy is expected. A strong grasp of statistical concepts, linear algebra, calculus, and probability theory is essential. Furthermore, effective problem-solving skills, excellent communication abilities to interact with cross-functional teams and stakeholders, meticulous attention to detail in data analysis and model development, and a willingness to adapt to new technologies and changing project requirements are highly valued. Exposure to NetSuite cloud ERP/Platform is considered an added advantage. This is a full-time position offering health insurance and leave encashment benefits. The work schedule involves fixed shifts from Monday to Friday. Applicants are required to have a minimum of 3 years of experience in deep learning and be located in Hyderabad, Telangana, with work being conducted in person.,

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

You have 4 to 8 years of experience in Classic AUTOSAR SW-C development with strong Embedded C knowledge. You should be proficient with the Vector stack, RTE concepts, CANoe configuration and scripting, and TRACE32 debugging. Expert-level Python is also required, with hands-on experience using Pandas and Pickle. Familiarity with ElementTree (XML) parsing and Jinja templating is highly valued. This role is located in Bangalore.
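For illustration of the Python tooling named here (ElementTree XML parsing, Pandas, Pickle, Jinja), a minimal sketch could look like this; the XML layout and template are invented examples, not the project's actual artifacts.

```python
import xml.etree.ElementTree as ET

import pandas as pd
from jinja2 import Template

# Hypothetical signal definitions, standing in for a real configuration XML.
xml_text = """
<signals>
  <signal name="VehicleSpeed" length="16" cycle="20"/>
  <signal name="EngineRPM" length="16" cycle="10"/>
</signals>
"""

root = ET.fromstring(xml_text)
rows = [{"name": s.get("name"), "length": int(s.get("length")), "cycle_ms": int(s.get("cycle"))}
        for s in root.findall("signal")]

df = pd.DataFrame(rows)
df.to_pickle("signals.pkl")        # persist the parsed table for later pipeline steps

# Render generated declarations from the parsed data via a Jinja template.
template = Template("{% for s in signals %}extern Signal_{{ s.name }};\n{% endfor %}")
print(template.render(signals=rows))
```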

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Bhubaneswar

On-site

At Rhythm, our values serve as the cornerstone of our organization. We are deeply committed to customer success, fostering innovation, and nurturing our employees. These values shape our decisions, actions, and interactions, ensuring that we consistently create a positive impact on the world around us. Rhythm Innovations is currently looking for a skilled and enthusiastic Machine Learning (ML) Developer to conceptualize, create, and implement machine learning models that enhance our supply chain risk management and other cutting-edge solutions. As an ML Developer, you will collaborate closely with our AI Architect and diverse teams to construct intelligent systems that tackle intricate business challenges and further our goal of providing unparalleled customer satisfaction.

Key Responsibilities:
- Model Development: Devise, execute, and train machine learning models utilizing cutting-edge algorithms and frameworks like TensorFlow, PyTorch, and scikit-learn.
- Data Preparation: Process, refine, and convert extensive datasets for the training and assessment of ML models.
- Feature Engineering: Identify and engineer pertinent features to enhance model performance and precision.
- Algorithm Optimization: Explore and implement advanced algorithms to cater to specific use cases such as classification, regression, clustering, and anomaly detection (see the sketch after this posting).
- Integration: Coordinate with software developers to integrate ML models into operational systems and guarantee smooth functionality.
- Performance Evaluation: Assess model performance using suitable metrics and consistently refine for accuracy, efficacy, and scalability.
- MLOps: Aid in establishing and overseeing CI/CD pipelines for model deployment and monitoring in production environments.
- Research and Development: Stay abreast of the latest breakthroughs in Gen AI and AI/ML technologies and propose inventive solutions.
- Collaboration: Engage closely with data engineers, product teams, and stakeholders to grasp requirements and deliver customized ML solutions.

Requirements:
- Educational Background: Bachelor's in Engineering in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience: 3 to 6 years of practical experience in developing and deploying machine learning models.

Technical Skills:
- Proficiency in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Experience with data manipulation tools like Pandas and NumPy, and visualization libraries such as Matplotlib or Seaborn.
- Familiarity with big data frameworks (Hadoop, Spark) is advantageous.
- Knowledge of SQL/NoSQL databases and data pipeline tools (e.g., Apache Airflow).
- Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and their Gen AI / AI/ML services.
- Thorough understanding of supervised and unsupervised learning, deep learning, and reinforcement learning.
- Exposure to MLOps practices and model deployment pipelines.

Soft Skills:
- Strong problem-solving and analytical abilities.
- Effective communication and teamwork skills.
- Capability to thrive in a dynamic, collaborative environment.
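The sketch referenced under Algorithm Optimization above: a minimal anomaly-detection example using scikit-learn's IsolationForest, with synthetic numbers standing in for real supply-chain records.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Synthetic order data; real inputs would come from the supply-chain platform.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "lead_time_days": rng.normal(12, 2, 500),
    "order_qty": rng.normal(1000, 150, 500),
})
# Inject a few obvious outliers so the example has something to find.
df.loc[:4, ["lead_time_days", "order_qty"]] = [[60.0, 8000.0]] * 5

model = IsolationForest(contamination=0.01, random_state=42).fit(df)
df["anomaly"] = model.predict(df) == -1     # True where a record looks anomalous
print(df[df["anomaly"]].head())
```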

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Pune, Maharashtra

On-site

The Junior AI/ML Engineer role at Fulcrum Digital is an opportunity for an aspiring AI innovator and technical enthusiast to kickstart their AI journey by contributing to the development of cutting-edge AI solutions. This hybrid work model position allows employees to work from the office twice a week, with office locations in Pune, Mumbai, or Coimbatore to choose from based on preference and convenience. As a Junior AI/ML Engineer, you will collaborate with experienced professionals to build and implement innovative AI models and applications that solve real-world problems. This role offers more than just an entry-level experience by providing hands-on experience in developing and deploying AI/ML models, working on impactful projects, and learning and growing in a dynamic and innovative environment. Key responsibilities include assisting in the development and implementation of AI/ML models and algorithms, contributing to data preprocessing and feature engineering processes, supporting model evaluation and optimization, collaborating on research initiatives, and assisting in deployment and monitoring of AI/ML solutions. The ideal candidate should have a foundational understanding of machine learning concepts, programming skills in Python, familiarity with TensorFlow or PyTorch, and basic knowledge of data manipulation using libraries like Pandas and NumPy. Strong analytical, problem-solving, and communication skills are essential, along with a proactive and eager-to-learn attitude. The successful candidate will have a Bachelor's degree in Computer Science, Data Science, or a related field, a passion for artificial intelligence and machine learning, and a desire for continuous learning in the field of AI. Superpowers should include the ability to identify patterns in data, explain technical concepts clearly, consider ethical implications in AI development, and maintain an interest in staying updated with advancements in the field. Joining Fulcrum Digital as a Junior AI/ML Engineer offers the opportunity to work on exciting AI projects, be mentored by experienced professionals, contribute to innovative technological solutions, and gain valuable experience in a rapidly evolving field. If you are ready to be part of the AI revolution and shape the future of technology, apply now by sending your resume to the provided email address with the subject line "Application for Junior AI/ML Engineer".,

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Data Scientist at Setu, you will have the opportunity to be a part of a team that is revolutionizing the fintech landscape. Setu believes in empowering every company to become a fintech company by providing them with cutting-edge APIs. The Data Science team at Setu is dedicated to understanding the vast population of India and creating solutions for various fintech sectors such as personal lending, collections, PFM, and BBPS. In this role, you will have the unique opportunity to delve deep into the business objectives and technical architecture of multiple companies, leading to a customer-centric approach that fosters innovation and delights customers. The learning potential in this role is immense, with the chance to explore, experiment, and build critical, scalable, and high-value use cases. At Setu, innovation is not just a goal; it's a way of life. The team is constantly pushing boundaries and introducing groundbreaking methods to drive business growth, enhance customer experiences, and streamline operational processes. From computer vision to natural language processing and Generative AI, each day presents new challenges and opportunities for breakthroughs. To excel in this role, you will need a minimum of 2 years of experience in Data Science and Machine Learning. Strong knowledge in statistics, tree-based techniques, machine learning, inference, hypothesis testing, and optimizations is essential. Proficiency in Python programming, building Data Pipelines, feature engineering, pandas, sci-kit-learn, SQL, and familiarity with TensorFlow/PyTorch are also required. Experience with deep learning techniques and understanding of DevOps/MLOps will be a bonus. Setu offers a dynamic and inclusive work environment where you will have the opportunity to work closely with the founding team who built and scaled public infrastructure such as UPI, GST, and Aadhaar. The company is dedicated to your growth and provides various benefits such as access to a fully stocked library, tickets to conferences, learning sessions, and development allowance. Additionally, Setu offers comprehensive health insurance, access to mental health counselors, and a beautiful office space designed to foster creativity and collaboration. If you are passionate about making a tangible difference in the fintech landscape, Setu offers the perfect platform to contribute to financial inclusion and improve millions of lives. Join us in our audacious mission and obsession with craftsmanship in code as we work together to build infrastructure that directly impacts the lives of individuals across India.,
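As a small illustration of the hypothesis-testing requirement above, a SciPy sketch; the two synthetic samples are assumptions standing in for metrics from two product variants.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
variant_a = rng.normal(loc=0.42, scale=0.10, size=400)   # e.g., conversion metric, variant A
variant_b = rng.normal(loc=0.45, scale=0.10, size=400)   # e.g., conversion metric, variant B

# Welch's two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the variants differ.")
else:
    print("Insufficient evidence of a difference.")
```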

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

You are a highly skilled and experienced Senior Engineer in Data Science who will be responsible for designing and implementing next-generation data science solutions. Your role will involve shaping the data strategy and driving innovation through advanced analytics and machine learning. In this position, your responsibilities will include providing technical leadership and designing end-to-end data science solutions. This encompasses data acquisition, ingestion, processing, storage, modeling, and deployment. You will also be tasked with developing and maintaining scalable data pipelines and architectures using cloud-based platforms and big data technologies to handle large volumes of data efficiently. Collaboration with stakeholders to define business requirements and translate them into technical specifications is essential. As a Senior Engineer in Data Science, you will select and implement appropriate machine learning algorithms and techniques, staying updated on the latest advancements in AI/ML to solve complex business problems. Building and deploying machine learning models, monitoring and evaluating model performance, and providing technical leadership and mentorship to junior data scientists are also key aspects of this role. Furthermore, you will contribute to the development of data science best practices and standards. To qualify for this position, you should hold a B.Tech/M.Tech/M.Sc (Mathematics/Statistics)/PhD from India or abroad. You are expected to have at least 4+ years of experience in data science and machine learning, with a total of around 7+ years of overall experience. A proven track record of technical leadership and implementing complex data science solutions is required, along with a strong understanding of data warehousing, data modeling, and ETL processes. Expertise in machine learning algorithms and techniques, time series analysis, programming proficiency in Python, knowledge of general data science tools, domain knowledge in Industrial, Manufacturing, and/or Healthcare, proficiency in cloud-based platforms and big data technologies, and excellent communication and collaboration skills are all essential qualifications for this role. Additionally, contributions to open-source projects or publications in relevant fields will be considered an added advantage.,

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Lead, you will be responsible for designing, developing, and maintaining data pipelines using Python and related technologies. You will lead and mentor a team of data engineers, offering technical guidance and support. Collaboration with cross-functional teams to comprehend data requirements and deliver solutions will be a key aspect of your role. Implementing and managing data quality and validation processes will also be part of your responsibilities. You will work on optimizing data pipelines for enhanced performance and scalability, contributing to the establishment of data engineering best practices and standards. To be successful in this role, you should possess at least 6 years of Python development experience, with a preference for Python 3. Additionally, you should have a minimum of 2 years of experience in leading development teams. A strong working knowledge of Linux CLI environments is essential, along with expertise in data processing using Pandas or Polars. Proven experience in constructing data pipelines and familiarity with general data engineering practices are crucial. Proficiency in database technologies such as Snowflake and ORMs like SQLAlchemy is required. You should also be adept at developing REST APIs in Python, have a solid grasp of Python testing frameworks, and experience with Docker containerization of Python applications. Strong Git version control skills, excellent communication, and leadership abilities are indispensable for this role. Prior experience with Snowflake Database will be an added advantage.,
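A minimal sketch of the data-quality and testing practices described above, as a Pandas cleaning step with a pytest-style check; the column names and rules are illustrative assumptions.

```python
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, enforce numeric types, and reject rows with missing keys."""
    out = df.drop_duplicates(subset="order_id").copy()
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    out = out.dropna(subset=["order_id", "amount"])
    if (out["amount"] < 0).any():
        raise ValueError("Negative amounts found - failing the pipeline run")
    return out

def test_clean_orders():        # runnable with pytest, or directly via the guard below
    raw = pd.DataFrame({"order_id": [1, 1, 2, 3],
                        "amount": ["10.5", "10.5", None, "7"]})
    cleaned = clean_orders(raw)
    assert list(cleaned["order_id"]) == [1, 3]
    assert cleaned["amount"].dtype == float

if __name__ == "__main__":
    test_clean_orders()
    print("checks passed")
```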

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining Apexon, a digital-first technology services firm that specializes in accelerating business transformation and delivering human-centric digital experiences. At Apexon, we meet customers at every stage of the digital lifecycle and help them outperform their competition through speed and innovation. With a focus on AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, we leverage our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the opportunities presented by the digital world. Our reputation is built on a comprehensive suite of engineering services, a commitment to solving our clients' toughest technology problems, and a dedication to continuous improvement. With backing from Goldman Sachs Asset Management and Everstone Capital, Apexon has a global presence with 15 offices and 10 delivery centers across four continents.

As part of our #HumanFirstDIGITAL initiative, you will be expected to excel in data analysis, VBA, Macros, and Excel. Your responsibilities will include monitoring and supporting healthcare operations, addressing client queries, and communicating effectively with stakeholders. Proficiency in Python scripting, particularly with pandas, NumPy, and ETL pipelines, is essential. You should be able to independently understand client requirements and queries and demonstrate strong data analysis skills. Knowledge of Azure Synapse basics, Azure DevOps basics, Git, T-SQL, and SQL Server will be beneficial.

At Apexon, we are committed to diversity and inclusion, and our benefits and rewards program is designed to recognize your skills and contributions, enhance your learning and upskilling experience, and provide support for you and your family. As an Apexon Associate, you will have access to continuous skill-based development, opportunities for career growth, and comprehensive health and well-being benefits. In addition to a supportive work environment, we offer a range of benefits, including group health insurance covering a family of four, term insurance, accident insurance, paid holidays, earned leave, paid parental leave, learning and career development opportunities, and employee wellness programs.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are ready to gain the skills and experience required to progress within your role and advance your career, and there is an excellent software engineering opportunity waiting for you. As a Software Engineer II at JPMorgan Chase in the Corporate Technology organization, you play a crucial role in the Data Services Team dedicated to enhancing, building, and delivering trusted market-leading Generative AI products securely, stably, and at scale. Being a part of the software engineering team, you will implement software solutions by designing, developing, and troubleshooting multiple components within technical products, applications, or systems while continuously enhancing your skills and experience. Your responsibilities include executing standard software solutions, writing secure and high-quality code in at least one programming language, designing and troubleshooting with consideration of upstream and downstream systems, applying tools within the Software Development Life Cycle for automation, and employing technical troubleshooting to solve basic complexity technical problems. Additionally, you will analyze large datasets to identify issues and contribute to decision-making for secure and stable application development, learn and apply system processes for developing secure code and systems, and contribute to a team culture of diversity, equity, inclusion, and respect. The qualifications, capabilities, and skills required for this role include formal training or certification in software engineering concepts with a minimum of 2 years of applied experience, experience with large datasets and predictive models, developing and maintaining code in a corporate environment using modern programming languages and database querying languages, proficiency in programming languages like Python, TensorFlow, PyTorch, PySpark, numpy, pandas, SQL, and familiarity with cloud services such as AWS/Azure. You should have a strong ability to analyze and derive insights from data, experience across the Software Development Life Cycle, exposure to agile methodologies, and emerging knowledge of software applications and technical processes within a technical discipline. Preferred qualifications include understanding of SDLC cycles for data platforms, major upgrade releases, patches, bug/hot fixes, and associated documentations.,
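For the large-dataset analysis this role mentions, a minimal PySpark profiling sketch; the Parquet path and column names are placeholders, not references to any JPMorgan system.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataset-profiling").getOrCreate()

# Hypothetical large event dataset stored as Parquet.
df = spark.read.parquet("s3://bucket/events/")

# Profile volumes and missing identifiers per event type before modeling.
summary = (df.groupBy("event_type")
             .agg(F.count(F.lit(1)).alias("rows"),
                  F.sum(F.col("user_id").isNull().cast("int")).alias("missing_user_id")))
summary.show(truncate=False)

spark.stop()
```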

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Junior Software Engineer at Verve Square, you will be an integral part of our dynamic team, providing L3 application support and contributing to ongoing feature enhancements for a live enterprise product. If you are passionate about coding, enjoy tackling real-world problems, and are eager to develop in a fast-paced tech environment, we are excited to learn more about you!

Key Skills:
- Strong understanding of Python, Flask, Pandas, and SQLAlchemy ORM (a stack sketch follows this posting)
- Experience in React.js with a focus on building UIs using the Material UI (MUI) framework
- Basic knowledge of SQL and database operations
- Strong problem-solving and debugging skills

Role & Responsibilities:
- Collaborate closely with senior engineers to support and improve production applications
- Address bugs, resolve user issues (L3 support), and assist in the implementation of new features
- Work with cross-functional teams to ensure the delivery of high-quality code
- Write clean, maintainable, and efficient code with proper documentation in place

If you possess these skills and are eager to contribute to a challenging and rewarding environment, we encourage you to apply for this exciting opportunity at Verve Square.
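The stack sketch referenced under Key Skills above: a small Flask endpoint combining SQLAlchemy and Pandas; the SQLite file, table, and route are hypothetical, and the seed data exists only so the example runs end to end.

```python
import pandas as pd
from flask import Flask, jsonify
from sqlalchemy import create_engine

app = Flask(__name__)
engine = create_engine("sqlite:///tickets.db")   # stand-in for the product's real database

# Seed a tiny tickets table so the sketch is self-contained.
pd.DataFrame({
    "severity": ["high", "high", "low", "medium"],
    "status": ["open", "closed", "open", "open"],
}).to_sql("tickets", engine, index=False, if_exists="replace")

@app.route("/api/ticket-summary")
def ticket_summary():
    # Aggregate open tickets per severity for a React/MUI dashboard widget.
    df = pd.read_sql("SELECT severity, status FROM tickets", engine)
    counts = (df[df["status"] == "open"]
              .groupby("severity").size()
              .rename("open_tickets").reset_index())
    return jsonify(counts.to_dict(orient="records"))

if __name__ == "__main__":
    app.run(debug=True)
```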

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Robotic Process Automation (RPA) Developer for Digital Transformation, you will be responsible for designing, developing, and testing the automation of workflows. Your key role will involve supporting the implementation of RPA solutions, collaborating with the RPA Business Analyst to document process details, and working with the engagement team to implement and test solutions while managing exceptions. Additionally, you will be involved in the maintenance and change control of existing artifacts.

To excel in this role, you should possess substantial experience in standard concepts, practices, technologies, tools, and methodologies related to digital transformations, including automation, analytics, and new/emerging technologies in AI/ML. Your ability to efficiently manage projects from inception to completion, coupled with strong execution skills, will be crucial. Knowledge of process reengineering would be advantageous. Your responsibilities will include executing projects on digital transformations, process redesign, and maximizing operational efficiency to identify cost-saving opportunities for the enterprise. You will also interact with business partners in India and the USA.

Key Job Functions and Responsibilities:
- Manage end-to-end execution of digital transformational initiatives
- Drive ideation and pilot projects on new/emerging technologies such as AI/ML and predictive analytics
- Evaluate multiple tools and select the appropriate technology stack for specific challenges
- Collaborate with Subject Matter Experts (SMEs) to document current and future processes
- Possess a clear understanding of process discovery and differentiate between RPA and regular automation
- Provide guidance on designing "to be" processes for effective automation
- Develop RPA solutions following best practices
- Consult with internal clients and partners to offer automation expertise
- Implement RPA solutions across various platforms (e.g., Citrix, web, Microsoft Office, database, scripting)
- Assist in establishing a change management framework for updates
- Offer guidance on process design from an automation perspective

Qualifications:
- Bachelor's/Master's/Engineering degree in IT, Computer Science, Software Engineering, or a relevant field
- Minimum of 3-4 years of experience in UiPath
- Strong programming skills in Python, SQL, and Pandas
- Expertise in at least one popular Python framework (e.g., Django, Flask, or Pyramid) is advantageous
- Application of Machine Learning/Deep Learning concepts in cognitive areas such as NLP, Computer Vision, and image analytics is highly beneficial
- Proficiency in working with structured/unstructured data, image (OCR)/voice, and descriptive/prescriptive analytics
- Excellent organizational and time management skills, with the ability to work independently
- Certification in UiPath is recommended
- Hands-on experience with and deep understanding of AWS tools and technologies like EC2, EMR, ECS, Docker, Lambda, and SageMaker
- Enthusiasm for collaborating with team members and other groups in a distributed work model
- Willingness to support and learn from teammates while sharing knowledge
- Comfortable working in a mid-day shift and remote setup

Work Schedule or Travel Requirements:
- 2-11 PM IST, 5 days a week

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Back-End Developer at our company, you will be responsible for developing an AI-driven prescriptive remediation model for SuperZoom, CBRE's data quality platform. Your primary focus will be on analyzing invalid records flagged by data quality rules and providing suggestions for corrected values based on historical patterns. It is crucial that the model you develop learns from past corrections to continuously enhance its future recommendations. The ideal candidate for this role should possess a solid background in machine learning, natural language processing (NLP), data quality, and backend development. Your key responsibilities will include developing a prescriptive remediation model to analyze and suggest corrections for bad records, implementing a feedback loop for continuous learning, building APIs and backend workflows for seamless integration, designing a data pipeline for real-time processing of flagged records, optimizing model performance for large-scale datasets, and collaborating effectively with data governance teams, data scientists, and front-end developers. Additionally, you will be expected to ensure the security, scalability, and performance of the system in handling sensitive data. To excel in this role, you should have at least 5 years of backend development experience with a focus on AI/ML-driven solutions. Proficiency in Python, including skills in Pandas, PySpark, and NumPy, is essential. Experience with machine learning libraries like Scikit-Learn, TensorFlow, or Hugging Face Transformers, along with a solid understanding of data quality, fuzzy matching, and NLP techniques for text correction, will be advantageous. Strong SQL skills and familiarity with databases such as PostgreSQL, Snowflake, or MS SQL Server are required, as well as expertise in building RESTful APIs and integrating ML models into production systems. Your problem-solving and analytical abilities will also be put to the test in handling diverse data quality issues effectively. Nice-to-have skills for this role include experience with vector databases (e.g., Pinecone, Weaviate) for similarity search, familiarity with LLMs and fine-tuning for data correction tasks, experience with Apache Airflow for workflow automation, and knowledge of reinforcement learning to enhance remediation accuracy over time. Your success in this role will be measured by the accuracy and relevance of suggestions provided for data quality issues in flagged records, improved model performance through iterative learning, seamless integration of the remediation model into SuperZoom, and on-time delivery of backend features in collaboration with the data governance team.,
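A minimal sketch of the remediation idea described above: suggesting a corrected value from historical fixes via fuzzy matching (standard-library difflib here); the lookup data is invented, and this is an illustration of the technique rather than CBRE's actual model.

```python
import difflib

# Historical corrections: previously observed bad value -> approved value (illustrative).
historical_corrections = {
    "New Yrok": "New York",
    "Sao Paolo": "Sao Paulo",
    "Hydrabad": "Hyderabad",
}
valid_values = list(set(historical_corrections.values()))

def suggest_correction(bad_value: str, cutoff: float = 0.8):
    """Return the closest known-good value, or None if nothing is similar enough."""
    if bad_value in historical_corrections:          # exact hit from the feedback loop
        return historical_corrections[bad_value]
    matches = difflib.get_close_matches(bad_value, valid_values, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest_correction("New Yrok"))    # exact historical match -> "New York"
print(suggest_correction("Hyderbad"))    # fuzzy match -> "Hyderabad"
```

In a production setting the dictionary would be replaced by a store of approved corrections, and each accepted or rejected suggestion would be written back to it, which is the feedback loop the posting describes.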

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Chandigarh

On-site

As a Python Backend Developer at Lookfinity, you will be a part of the Backend Engineering team that is focused on building scalable, data-driven, and cloud-native applications to solve real-world business problems. We are dedicated to maintaining clean architecture, enhancing performance, and designing elegant APIs. Join our dynamic team that is enthusiastic about backend craftsmanship and modern infrastructure. You will be working with a tech stack that includes languages and frameworks such as Python, FastAPI, and GraphQL (Ariadne), databases like PostgreSQL, MongoDB, and ClickHouse, messaging and task queues such as RabbitMQ and Celery, cloud services like AWS (EC2, S3, Lambda), Docker, Kubernetes, data processing tools like Pandas and SQL, and monitoring and logging tools like Prometheus and Grafana. Additionally, you will be utilizing version control systems like Git, GitHub/GitLab, and CI/CD tools. Your responsibilities will include developing and maintaining scalable RESTful and GraphQL APIs using Python, designing and integrating microservices with databases, writing clean and efficient code following best practices, working with Celery & RabbitMQ for async processing, containerizing services using Docker, collaborating with cross-functional teams, monitoring and optimizing application performance, participating in code reviews, and contributing to team knowledge-sharing. We are looking for candidates with 6 months to 1 year of hands-on experience in backend Python development, a good understanding of FastAPI or willingness to learn, basic knowledge of SQL and familiarity with databases like PostgreSQL and/or MongoDB, exposure to messaging systems like RabbitMQ, familiarity with cloud platforms like AWS, understanding of Docker and containerization, curiosity towards learning new technologies, clear communication skills, team spirit, and appreciation for clean code. Additional experience with GraphQL APIs, Kubernetes, data pipelines, CI/CD processes, and observability tools is considered a bonus. In this role, you will have the opportunity to work on modern backend systems, receive mentorship, and have technical growth plans tailored to your career goals. This is a full-time position with a day shift schedule located in Panchkula. Join us at Lookfinity and be a part of our innovative team dedicated to backend development.,
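A minimal sketch of the asynchronous-processing pattern in the stack above (Celery over RabbitMQ); the broker URL, module name, and task body are placeholder assumptions.

```python
from celery import Celery

# Hypothetical local RabbitMQ broker; real deployments would use a managed URL.
app = Celery("workers", broker="amqp://guest:guest@localhost:5672//")

@app.task(bind=True, max_retries=3)
def enrich_record(self, record_id: int):
    """Fetch a record, enrich it, and persist the result (details omitted)."""
    try:
        ...  # call downstream services, write to PostgreSQL/MongoDB
        return {"record_id": record_id, "status": "done"}
    except Exception as exc:
        # Retry with a 30-second backoff on transient failures.
        raise self.retry(exc=exc, countdown=30)

# Enqueue from an API handler:  enrich_record.delay(42)
# Run a worker (module name is an assumption):  celery -A workers worker --loglevel=info
```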

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Senior Machine Learning Engineer at our AI/ML team, you will be responsible for designing and building intelligent search systems. Your focus will be on utilizing cutting-edge techniques in vector search, semantic similarity, and natural language processing to create innovative solutions. Your key responsibilities will include designing and implementing high-performance vector search systems using tools like FAISS, Milvus, Weaviate, or Pinecone. You will develop semantic search solutions that leverage embedding models and similarity scoring for precise and context-aware retrieval. Additionally, you will be expected to research and integrate the latest advancements in ANN algorithms, transformer-based models, and embedding generation. Collaboration with cross-functional teams, including data scientists, backend engineers, and product managers, will be essential to bring ML-driven features from concept to production. Furthermore, maintaining clear documentation of methodologies, experiments, and findings for technical and non-technical stakeholders will be part of your role. To qualify for this position, you should have at least 3 years of experience in Machine Learning, with a focus on NLP and vector search. A deep understanding of semantic embeddings, transformer models (e.g., BERT, RoBERTa, GPT), and hands-on experience with vector search frameworks is required. You should also possess a solid understanding of similarity search techniques such as cosine similarity, dot-product scoring, and clustering methods. Strong programming skills in Python and familiarity with libraries like NumPy, Pandas, Scikit-learn, and Hugging Face Transformers are necessary. Exposure to cloud platforms, preferably Azure, and container orchestration tools like Docker and Kubernetes is preferred. This is a full-time position with benefits including health insurance, internet reimbursement, and Provident Fund. The work schedule consists of day shifts, fixed shifts, and morning shifts, and the work location is in-person. The application deadline for this role is 18/04/2025.,
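A minimal sketch of similarity-scored retrieval as described above; the TF-IDF vectorizer is only a stand-in for a transformer embedding model, and the documents and query are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "How do I reset my account password?",
    "Steps to configure two-factor authentication",
    "Refund policy for cancelled orders",
]

# Stand-in embedding model; a semantic system would use dense sentence embeddings.
vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

query = "forgot my password"
query_vector = vectorizer.transform([query])

# Cosine similarity scores the query against every document.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best = int(np.argmax(scores))
print(f"Top match (score {scores[best]:.2f}): {documents[best]}")
```

Swapping the vectorizer for transformer embeddings and the brute-force comparison for an approximate-nearest-neighbour index (e.g., FAISS or a managed vector database) is the usual step toward production scale.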

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for architecting and delivering Automation and AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. Working closely with business stakeholders, you will understand business requirements and design custom AI solutions to address complex challenges. This role offers the opportunity to accelerate AI adoption across the business landscape in the Statistical Applications and Data domain.

Your key responsibilities include defining and leading AI use cases in clinical projects from start-up through close-out, covering tasks such as protocol development, site management, and data review. To excel in this role, you must be a proficient programmer capable of thoroughly developing AI solutions. Additionally, you will be expected to coach and mentor junior AI/ML engineers to enhance and accelerate AI adoption in business processes.

Required Technical Skills:
- Bachelor's degree in a relevant scientific discipline (e.g., Biomedical Engineering, Life Sciences, Nursing, Pharmacy) or a clinical background (e.g., MD - Doctor of Medicine, or RN - Registered Nurse)
- Advanced degree (e.g., Master's or Ph.D.) in Data Science or AI/ML
- Overall experience of 10-12 years, with a minimum of 5-7 years in clinical research, particularly managing clinical trials in pharmaceutical, biotechnology, or contract research organization (CRO) settings
- Minimum of 5-7 years of experience in AI research and development, specifically focusing on healthcare or life sciences applications
- Strong understanding of clinical trial regulations and guidelines, including Good Clinical Practice (GCP), International Conference on Harmonization (ICH), and applicable local regulations
- Proven track record in designing and delivering AI solutions, emphasizing foundation models, large language models, or similar technologies
- Experience in natural language processing (NLP) and text analytics is highly desirable
- Proficiency in programming languages such as Python and R, and experience with AI frameworks like TensorFlow, PyTorch, or Hugging Face
- Knowledge of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and related services is a plus
- Experience working with large datasets, performing data pre-processing, feature engineering, and model evaluation
- Proficiency in solution architecture and design, translating business requirements into technical specifications, and developing scalable and robust AI solutions

In summary, this role requires a seasoned professional with a strong background in AI, clinical research, and technical expertise to drive innovation and AI adoption in the pharmaceutical domain, particularly in clinical settings.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Haryana

On-site

The Senior AI Engineer - Agentic AI position at JMD Megapolis, Gurugram requires a minimum of 5+ years of experience in Machine Learning engineering, Data Science, or similar roles focusing on applied data science and entity resolution. You will be expected to have a strong background in machine learning, data mining, and statistical analysis for model development, validation, implementation, and product integration. Proficiency in programming languages like Python or Scala, along with experience in working with data manipulation and analysis libraries such as Pandas, NumPy, and scikit-learn is essential. Additionally, experience with large-scale data processing frameworks like Spark, proficiency in SQL and database concepts, as well as a solid understanding of feature engineering, dimensionality reduction, and data preprocessing techniques are required. As a Senior AI Engineer, you should possess excellent problem-solving skills and the ability to devise creative solutions to complex data challenges. Strong communication skills are crucial for effective collaboration with cross-functional teams and explaining technical concepts to non-technical stakeholders. Attention to detail, ability to work independently, and a passion for staying updated with the latest advancements in the field of data science are desirable traits for this role. The ideal candidate for this position would hold a Masters or PhD in Computer Science, Data Science, Statistics, or a related quantitative field. They should have 5-10 years of industry experience in developing AI solutions, including machine learning and deep learning models. Strong programming skills in Python and familiarity with libraries such as TensorFlow, PyTorch, or scikit-learn are necessary. Furthermore, a solid understanding of machine learning algorithms, statistical analysis, data preprocessing techniques, and experience in working with large datasets to implement scalable AI solutions are required. Proficiency in data visualization and reporting tools, knowledge of cloud platforms like AWS, Azure, Google Cloud for AI deployment, familiarity with software development practices, and version control systems are all valued skills. Problem-solving abilities, creative thinking to overcome challenges, strong communication, and teamwork skills to collaborate effectively with cross-functional teams are essential for success in this role.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

Are you passionate about data and coding? Do you enjoy working in a fast-paced and dynamic start-up environment? If so, we are looking for a talented Python developer to join our team! We are a data consultancy start-up with a global client base, headquartered in London, UK, and we are looking for someone to join us full time on-site in our cool office in Gurugram.

Uptitude is a forward-thinking consultancy that specializes in providing exceptional data and business intelligence solutions to clients worldwide. Our team is passionate about empowering businesses with data-driven insights, enabling them to make informed decisions and achieve remarkable results. At Uptitude, we embrace a vibrant and inclusive culture where innovation, excellence, and collaboration thrive.

As a Python Developer at Uptitude, you will be responsible for developing high-quality, scalable, and efficient software solutions. Your primary focus will be on designing and implementing Python-based applications, integrating data sources, and working closely with the data and business intelligence teams. You will have the opportunity to contribute to all stages of the software development life cycle, from concept and design to testing and deployment. In addition to your technical skills, you should be a creative thinker, have effective communication skills, and be comfortable working in a fast-paced and dynamic environment.

Requirements:
- 3-5 years of experience as a Python Developer or in a similar role
- Strong proficiency in Python and its core libraries (e.g., Pandas, NumPy, Matplotlib)
- Proficiency in web frameworks (e.g., Flask, Django) and RESTful APIs
- Working knowledge of database technologies (e.g., Postgres, Redis, RDBMS) and data modeling concepts
- Hands-on experience with advanced Excel
- Ability to work with cross-functional teams and communicate complex ideas to non-technical stakeholders
- Awareness of ISO 27001; a creative thinker and problem solver
- Strong attention to detail and ability to work in a fast-paced environment
- Head office based in London, UK, with the role located in Gurugram, India

At Uptitude, we embrace a set of core values that guide our work and define our culture:
- Be Awesome: Strive for excellence in everything you do, continuously improving your skills and delivering exceptional results.
- Step Up: Take ownership of challenges, be proactive, and seek opportunities to contribute beyond your role.
- Make a Difference: Embrace innovation, think creatively, and contribute to the success of our clients and the company.
- Have Fun: Foster a positive and enjoyable work environment, celebrating achievements and building strong relationships.

Uptitude values its employees and offers a competitive benefits package, including:
- Competitive salary commensurate with experience and qualifications
- Private health insurance coverage
- Offsite trips to encourage team building and knowledge sharing
- Quarterly team outings to unwind and celebrate achievements
- Corporate English lessons with a UK instructor

We are a fast-growing company with a global client base, so this is an excellent opportunity for the right candidate to grow and develop their skills in a dynamic and exciting environment. If you are passionate about coding, have experience with Python, and want to be part of a team that is making a real impact, we want to hear from you!

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team. We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions).

In this role, you will have the opportunity to:
- Design, build, and maintain data pipelines to cater to the requirements of various stakeholders, including software developers, data scientists, analysts, and business teams.
- Ensure that the data pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage.
- Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensure adherence to governance policies and data access controls across projects.

To excel in this role, you should possess the following qualifications and skills:
- A Bachelor's degree in Computer Science, Information Systems, or a related field.
- Minimum of 4 years of experience in designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas.
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- Strong understanding of data modeling, schema design, and data transformation patterns.
- Experience with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion.
- Understanding of observability, data versioning, and pipeline testing tools.
- Previous engagement with diverse stakeholders, data requirement gathering, and support for iterative development cycles.
- Background or familiarity with the Power, Energy, or Electrification sector is advantageous.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.

This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities, countries, and shape the future. Siemens is a global organization comprising over 312,000 individuals across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are made based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow.

Learn more about Siemens careers at: www.siemens.com/careers
Discover the digital world of Siemens here: www.siemens.com/careers/digitalminds

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As an LLM Engineer at HuggingFace, you will play a crucial role in bridging the gap between advanced language models and real-world applications. Your primary focus will be on fine-tuning, evaluating, and deploying LLMs using frameworks such as HuggingFace and Ollama. You will be responsible for developing React-based applications with seamless LLM integrations through REST, WebSockets, and APIs. Additionally, you will work on building scalable pipelines for data extraction, cleaning, and transformation, as well as creating and managing ETL workflows for training data and RAG pipelines. Your role will also involve driving full-stack LLM feature development from prototype to production.

To excel in this position, you should have at least 2 years of professional experience in ML engineering, AI tooling, or full-stack development. Strong hands-on experience with HuggingFace Transformers and LLM fine-tuning is essential. Proficiency in React, TypeScript/JavaScript, and back-end integration is required, along with comfort working with data engineering tools such as Python, SQL, and Pandas. Familiarity with vector databases, embeddings, and LLM orchestration frameworks is a plus.

Candidates with experience in Ollama, LangChain, or LlamaIndex will be given bonus points. Exposure to real-time LLM applications like chatbots, copilots, or internal assistants, as well as prior work with enterprise or SaaS AI integrations, is highly valued. This role offers a remote-friendly environment with flexible working hours and a high-ownership opportunity. Join our small, fast-moving team at HuggingFace and be part of building the next generation of intelligent systems. If you are passionate about working on impactful AI products and have the drive to grow in this field, we would love to hear from you.
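A minimal sketch of loading and running a Hugging Face causal language model, the kind of building block this role works with; the checkpoint (distilgpt2) and prompt are illustrative choices, not the team's actual models.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public checkpoint chosen so the example runs quickly on CPU.
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarise the benefits of retrieval-augmented generation:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding with a bounded number of new tokens.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```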

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

delhi

On-site

As a Data Analyst Intern at our company in Delhi, you will be responsible for aggregating, cleansing, and analyzing large datasets from various sources. Your role will involve engineering and optimizing complex SQL queries for data extraction, manipulation, and detailed analysis. Additionally, you will develop advanced Python scripts to automate data workflows and transformation processes and to create sophisticated visualizations, and you will build dynamic dashboards and analytical reports using Excel's advanced features such as Pivot Tables, VLOOKUP, and Power Query. Your key responsibilities include decoding intricate data patterns to extract actionable intelligence that drives strategic decision-making, maintaining strict data integrity, precision, and security in all analytical outputs, and designing automation frameworks that eliminate redundancies and improve operational efficiency.

To excel in this role, you should have mastery of SQL, including crafting complex queries, optimizing joins, executing advanced aggregations, and structuring data efficiently with Common Table Expressions (CTEs). Proficiency in Python is crucial, with hands-on experience in data-centric libraries such as Pandas, NumPy, Matplotlib, and Seaborn for analysis and visualization. Advanced Excel skills are also required, encompassing Pivot Tables, Macros, and Power Query to streamline data processing and enhance analytical efficiency. Furthermore, you should bring strong analytical acumen, exceptional problem-solving abilities, and the capability to extract meaningful insights from complex datasets, along with the communication and presentation skills to distill intricate findings into compelling narratives for stakeholders.

The role is offered as a full-time, permanent, or internship engagement. Benefits include paid sick time, paid time off, performance bonuses, and yearly bonuses; the schedule is a day shift and the work location is in person. If you are looking to apply your analytical skills in a dynamic environment and contribute to strategic decision-making through data analysis, this Data Analyst Intern position could be the perfect fit for you.
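For a sense of the day-to-day tooling described above, here is a small sketch that cleans a dataset with Pandas, builds the equivalent of a pivot table, and visualizes it with Seaborn. The orders.csv file and its column names are hypothetical, used only to illustrate the workflow.

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Hypothetical export from a transactional system.
    df = pd.read_csv("orders.csv", parse_dates=["order_date"])

    # Cleansing: drop incomplete rows and exact duplicates.
    df = df.dropna(subset=["region", "revenue"]).drop_duplicates()

    # Pivot-table-style aggregation: monthly revenue per region.
    monthly = (df.assign(month=df["order_date"].dt.to_period("M").astype(str))
                 .pivot_table(index="month", columns="region",
                              values="revenue", aggfunc="sum"))
    print(monthly.head())

    # Visualization for the dashboard or report.
    ax = sns.lineplot(data=monthly)
    ax.set(title="Monthly revenue by region", ylabel="Revenue")
    plt.tight_layout()
    plt.show()

The same aggregation could equally be pushed into SQL with a CTE and GROUP BY before the result ever reaches Python, which is usually preferable for very large tables.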

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Job Posting Title: SR. DATA SCIENTIST
Band/Level: 5-2-C
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: 5-7 years

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.

Job Overview
Solves complex problems and helps stakeholders make data-driven decisions by leveraging quantitative methods such as machine learning. The work often involves synthesizing large volumes of information and extracting signals from data in a programmatic way.

Roles & Responsibilities
Design, train, and evaluate supervised and unsupervised models (regression, classification, clustering, uplift).
Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME).
Perform deep exploratory data analysis (EDA) to uncover patterns and anomalies.
Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast).
Ensure data quality through rigorous validation and automated checks.
Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs.
Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches.
Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias.
Establish model tracking and registry (MLflow, SageMaker Model Registry).
Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions).
Monitor data and concept drift; trigger retuning or rollback as needed.
Design and analyze A/B tests, causal inference studies, and Bayesian experiments.
Provide statistically grounded insights and recommendations to stakeholders.
Translate business objectives into data-driven solutions; present findings to executive and non-technical audiences.
Mentor junior data scientists, review code/notebooks, and champion best practices.

Desired Candidate - Minimum Qualifications
M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, or Data Science.
5+ years building and deploying ML models in production.
Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
Solid grounding in statistical inference, hypothesis testing, and experimental design.

Preferred / Nice-to-Have
Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enables the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
Competitive Salary Package
Performance-Based Bonus Plans
Health and Wellness Incentives
Employee Stock Purchase Program
Community Outreach Programs / Charity Events

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities.

Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
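Tying back to the responsibilities in this posting, the sketch below shows automated hyperparameter optimization with Optuna around a scikit-learn regressor, one of the techniques the role names explicitly. The synthetic dataset and search space are placeholder assumptions; a demand-forecasting project would plug in SKU-level features and score with MAPE or WAPE rather than plain absolute error.

    import optuna
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for an engineered SKU-level feature table.
    X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=0)

    def objective(trial):
        # Illustrative search space, not a recommendation.
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 100, 400),
            "max_depth": trial.suggest_int("max_depth", 3, 12),
            "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
        }
        model = RandomForestRegressor(random_state=0, **params)
        # 5-fold cross-validated negative MAE; Optuna maximizes this value.
        return cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=25)
    print("Best params:", study.best_params)
    print("Best CV score:", study.best_value)

In a production setting, each trial's parameters and scores would typically be logged to a tracking server such as MLflow, and the winning model registered before it enters the CI/CD retraining pipeline.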

Posted 1 week ago

Apply