Jobs
Interviews

4602 Numpy Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- 0–2 years of experience as an AI/ML engineer or in a similar role.
- Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
- Hands-on experience with model development and deployment processes.
- Proficiency in programming languages such as Python.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Familiarity with cloud platforms (e.g., AWS) and containerization (e.g., Docker, Kubernetes).
- Familiarity with version control systems (e.g., GitHub).
- Proficiency in data manipulation and analysis using libraries such as NumPy and Pandas.
- Good to have: knowledge of deep learning and MLOps tools such as Kubeflow, MLflow, and Nextflow.
- Knowledge of text analytics, NLP, and Gen AI.

Mandatory Skill Sets: MLOps, AI/ML
Preferred Skill Sets: MLOps, AI/ML
Years of Experience Required: 0–2
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study Preferred: (not specified)
Certifications: (not specified)
Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Strategy {+ 22 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
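Several listings on this page, including the one above, ask for data preprocessing and NumPy/Pandas skills. As an illustration only (this snippet is not part of the posting), a minimal feature-standardization step of the kind such roles involve might look like:

```python
import numpy as np

def standardize(X):
    """Scale each feature column to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std = np.where(std == 0, 1.0, std)  # guard against constant columns
    return (X - mean) / std

# Synthetic data standing in for a real feature matrix.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
Xs = standardize(X)
```

In practice, scikit-learn's `StandardScaler` performs the same transformation with fit/transform semantics suited to train/test splits.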

Posted 3 hours ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Title: Python Developer
Position Type: Full Time
Work Location: Noida/Gurugram
Working Style: Hybrid
People Manager Role: No
Required Education and Certifications: B.Tech, MCA, or an engineering degree
Required Years of Experience: Minimum 3 years of relevant experience

Aon is looking for a Senior Analyst, Software Engineering. As part of an industry-leading team, you will help empower results for our clients by delivering innovative and effective solutions as part of Aon Life Solutions within the Aon Reinsurance Strategy and Technology Group in India. As a software engineer and Senior Analyst, you will report directly to the Director, Global Life Risk Modelling Consulting offshore lead.

Your Impact as a Senior Analyst
Aon Life Solutions provides high-efficiency computing software, consulting services, and advisory services to financial intermediaries, primarily insurance companies. The high-performance software platform, PathWise, gives companies the ability to easily model financial obligations without involved coding and runs on Graphics Processing Units (GPUs), making it the fastest readily available software platform currently in use in the insurance industry. Global Life Risk Modelling consulting services has focused primarily on actuarial analysis, modelling of insurance liabilities and financial assets, and reviews of hedging programs. As a Senior Analyst and Software Engineer in the Global Life Risk Modelling Consulting Team, you will work with different teams to support the development of models and processes for companies moving to PathWise. You will break down complex problems into steps that drive product development. You will provide valuable contributions to the Aon Life product vision and go-to-market strategy.

What The Day Will Look Like
- Support the development of models and processes for companies moving to PathWise.
- Support the development of complex actuarial and financial products and libraries powered by HPC on GPUs in an Agile environment.
- Provide technical guidance as an expert in multiple technologies to the software development team and supervise their activities.
- Develop innovative solutions to some of the most complex and challenging problems by collaborating as needed across regions, product areas and functions.
- Understand trends in the financial and insurance industries, the cloud ecosystem, the competitive market and customer requirements in depth.
- Perform analysis and research into technology trends, products and competitors; provide recommendations for improvements and general changes to the product.

How This Opportunity Is Different
Be part of a growing multi-disciplinary team developing an exciting new product used globally by leading financial institutions. You will connect the technical and business worlds through your work on a disruptive high-performance computing platform leveraging GPU technology. Experience first-hand how your valuable contributions through all stages, from inception, design and development to delivery and go-to-market, turn your product into a success.

Skills And Experience That Will Lead To Success
- A motivated, self-driven individual who is not afraid to take initiative and facilitate change where necessary.
- Bachelor's degree in a technical field such as Electrical/Computer Engineering or Computer Science; a Master's degree is a plus.
- 3+ years' experience as a Software Engineer developing and launching products, libraries and technologies within the actuarial and/or financial industries in an agile environment.
- Expert knowledge of Python (NumPy, Pandas) is a must.
- Expert knowledge of Agile methodologies and the Software Development Life Cycle (SDLC).
- Experience with Azure DevOps and Atlassian tools (Jira, Confluence).
- Knowledge of C, C++, or C# a plus.
- Knowledge of parallel computing and CUDA a big plus.
- Experience with software version control systems: Git, SVN.
- Experience developing software on Linux.
- Knowledge of actuarial and/or financial products and libraries such as QuantLib.
- Technical knowledge of and experience with multiple Software Development Life Cycle (SDLC) models.

How We Support Our Colleagues
In addition to our comprehensive benefits package, we encourage an inclusive workforce. Plus, our agile environment allows you to manage your wellbeing and work/life balance, ensuring you can be your best self at Aon. Furthermore, all colleagues enjoy two "Global Wellbeing Days" each year, encouraging you to take time to focus on yourself. We offer a variety of working style solutions for our colleagues as well. Our continuous learning culture inspires and equips you to learn, share and grow, helping you achieve your fullest potential. As a result, at Aon, you are more connected, more relevant, and more valued. Aon values an innovative and inclusive workplace where all colleagues feel empowered to be their authentic selves. Aon is proud to be an equal opportunity workplace. Aon provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, creed, sex, sexual orientation, gender identity, national origin, age, disability, veteran, marital, domestic partner status, or other legally protected status. We are committed to providing equal employment opportunities and fostering an inclusive workplace. If you require accommodations during the application or interview process, please let us know. You can request accommodations by emailing us at ReasonableAccommodations@Aon.com or your recruiter. We will work with you to meet your needs and ensure a fair and equitable experience.

2564424

Posted 3 hours ago

Apply

4.0 - 7.0 years

0 Lacs

India

On-site

Job Title: Python Developer
Location: Hyderabad (On-Site)
Job Type: Full-Time
Experience: 4 to 7 Years
Notice Period: Immediate to 15 Days

Job Summary: We are looking for a talented and motivated Python Developer with strong experience in building APIs using FastAPI and Flask. The ideal candidate will possess excellent problem-solving and communication skills and a passion for delivering high-quality, scalable backend solutions. You will play a key role in developing robust backend services, integrating APIs, and collaborating with frontend and QA teams to deliver production-ready software.

Key Responsibilities:
- Design, develop, and maintain backend services using FastAPI and Flask.
- Write clean, reusable, and efficient Python code following best practices.
- Work with Large Language Models (LLMs) and contribute to building advanced AI-driven solutions.
- Collaborate with cross-functional teams to gather requirements and translate them into technical implementations.
- Optimize applications for maximum speed, scalability, and reliability.
- Implement secure API solutions and ensure compliance with data protection standards.
- Develop and maintain unit tests, integration tests, and documentation for code, APIs, and system architecture.
- Participate in code reviews and contribute to continuous improvement of development processes.

Required Skills & Qualifications:
- Strong programming skills in Python with hands-on experience in backend development.
- Proficiency in developing RESTful APIs using the FastAPI and Flask frameworks.
- Solid understanding of REST principles and asynchronous programming in Python.
- Good communication skills and the ability to troubleshoot and solve complex problems effectively.
- Experience with version control tools like Git.
- Eagerness to learn and work with LLMs, vector databases, and other modern AI technologies.
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Nice to Have:
- Experience with LLMs, prompt engineering, and vector databases.
- Understanding of Transformer architecture, embeddings, and Retrieval-Augmented Generation (RAG).
- Familiarity with data processing libraries like NumPy and Pandas.
- Knowledge of Docker for containerized application development and deployment.

Skills: Python, FastAPI, Flask, REST APIs, Asynchronous Programming, Git, API Security, Data Protection, LLMs, Vector DBs, Transformers, RAG, NumPy, Pandas, Docker.

If you are passionate about backend development and eager to work on innovative AI solutions, we would love to hear from you!

Job Type: Full-time
Pay: ₹10,764.55 - ₹65,865.68 per month
Benefits: Flexible schedule, Health insurance, Paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
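The listing above calls out asynchronous programming in Python alongside FastAPI. FastAPI route handlers build on the same async/await model the standard library provides; a minimal stdlib-only sketch (illustrative, not part of the posting — the fetch function is a made-up stand-in for real I/O):

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Stand-in for an awaitable I/O call (e.g. a database query or HTTP request).
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def main():
    # asyncio.gather runs the coroutines concurrently and preserves input order.
    return await asyncio.gather(*(fetch_record(i) for i in range(3)))

results = asyncio.run(main())
print(results)
```

The three simulated requests overlap instead of running serially, which is the property async frameworks like FastAPI exploit under load.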

Posted 3 hours ago

Apply

0 years

1 Lacs

Calicut

On-site

Job Summary: We are seeking a highly motivated and enthusiastic Python Intern to join our tech team. This internship offers an excellent opportunity to gain real-world experience in software development using Python. You will work closely with senior developers on live projects, learning modern coding practices, tools, and frameworks.

Key Responsibilities:
- Assist in designing, coding, and testing Python-based applications.
- Work on automation scripts, web scraping, or API integrations (as required).
- Support the development team in debugging and improving code.
- Participate in team meetings and contribute to project planning and problem-solving.
- Document code, processes, and technical specifications clearly.
- Learn and apply best practices in version control (e.g., Git) and coding standards.

Requirements:
- Basic understanding of Python programming and OOP concepts.
- Familiarity with any Python frameworks/libraries (e.g., Flask, Django, Pandas, NumPy) is a plus.
- Knowledge of databases (MySQL, SQLite, or MongoDB) preferred.
- Strong problem-solving skills and willingness to learn.
- Good communication and teamwork abilities.
- Currently pursuing or recently completed a degree in Computer Science, IT, or related fields.

Job Type: Internship
Pay: Up to ₹10,000.00 per month
Ability to commute/relocate: Kozhikode, Kerala: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person

Posted 3 hours ago

Apply

12.0 years

0 Lacs

Greater Kolkata Area

On-site

Role description: We are looking for an experienced and hands-on Data Science Manager who can lead and deliver high-impact projects across domains such as forecasting, recommendation systems, causal inference, and Generative AI (GenAI). The ideal candidate will combine strong Python coding abilities with leadership and solutioning skills to manage a team and work closely with business and engineering stakeholders.

Key Responsibilities:
- Lead a team of data scientists on end-to-end projects across one or more of the following areas:
  - Time series forecasting (demand, sales, financial)
  - Recommendation engines (content, product, personalization)
  - Causal inference / uplift modeling / A/B testing
  - Generative AI (summarization, copilots, semantic search using LLMs)
- Translate complex business problems into actionable data science projects.
- Drive project scoping, solution design, development, deployment, and stakeholder communication.
- Write clean, modular, and scalable Python code to build, validate, and deploy ML models.
- Oversee model performance evaluation, tuning, and production monitoring.
- Guide the team in adopting modern tools: MLflow, LangChain, XGBoost, Prophet, vector DBs, etc.
- Collaborate cross-functionally with product, engineering, and business stakeholders.
- Stay updated on advancements in ML, GenAI, and applied analytics, and bring innovation into project design.

Required Qualifications:
- 8–12 years of experience in data science, with at least 2+ years in a leadership role
- Strong hands-on experience in Python (Pandas, NumPy, Scikit-learn, Statsmodels, Matplotlib, etc.)
- Proven project experience in at least one of the following:
  - Forecasting (ARIMA, Prophet, LSTM, etc.)
  - Recommendation systems (collaborative filtering, content-based)
  - Causal modeling (difference-in-differences, propensity scores, uplift models)
  - GenAI (OpenAI APIs, LangChain, LLMs, vector databases)
- Strong knowledge of model evaluation metrics, feature engineering, and data wrangling
- Solid understanding of statistics, machine learning, and applied modeling

Preferred Skills:
- Experience with cloud platforms (AWS, Azure, GCP)
- Exposure to MLOps tools (Docker, MLflow, Git, Airflow)
- Experience deploying models via REST APIs or integrating with applications
- Familiarity with business domains like Retail, BFSI, CPG, or Healthcare is a plus

Soft Skills:
- Strong stakeholder management and communication abilities
- Ability to mentor and coach team members
- Comfortable managing multiple projects in a fast-paced environment
- Structured problem-solving and storytelling with data
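The role above covers time-series forecasting (ARIMA, Prophet, LSTM). As an illustration only (not part of the posting, and the sales numbers are invented), the simplest baseline such models are usually judged against is a moving-average one-step forecast:

```python
import numpy as np

def moving_average_forecast(series, window=3):
    """Naive one-step-ahead forecast: the mean of the last `window` observations."""
    series = np.asarray(series, dtype=float)
    if window > series.size:
        raise ValueError("window larger than series")
    return series[-window:].mean()

# Hypothetical monthly sales figures.
sales = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
forecast = moving_average_forecast(sales, window=3)
print(forecast)  # (13 + 12 + 14) / 3 = 13.0
```

A candidate model (ARIMA, Prophet, or an LSTM) earns its complexity only if it beats baselines like this on held-out data.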

Posted 4 hours ago

Apply

0 years

5 - 9 Lacs

Haryana

On-site

Role Description: We are seeking a skilled professional to maintain and support batch jobs in a legacy environment. The role involves managing and monitoring ETL processes, addressing issues, and enhancing existing PL/SQL scripts. The ideal candidate will have strong expertise in Informatica, SQL Server, and data warehousing concepts, along with experience in troubleshooting and improving batch job performance.

Key Responsibilities:
- Design and implement robust ETL pipelines using AWS Glue, Lambda, Redshift, and S3.
- Monitor and optimize the performance of data workflows and batch processing jobs.
- Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks.
- Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy.
- Develop and maintain automated solutions for data transformation, migration, and integration tasks.
- Implement best practices for data security, data governance, and compliance within AWS environments.
- Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management.
- Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes.

Required Skills and Experience:
- Strong experience with data engineering practices.
- Experience with AWS services, particularly AWS Glue, Lambda, S3, and other AWS data tools.
- Proficiency in SQL, Python, PySpark, NumPy, etc., and experience working with large-scale data sets.
- Experience designing and implementing ETL pipelines in cloud environments.
- Expertise in troubleshooting and optimizing data processing workflows.
- Familiarity with data warehousing concepts and cloud-native data architecture.
- Knowledge of automation and orchestration tools in a cloud-based environment.
- Strong problem-solving skills and the ability to debug and improve the performance of data jobs.
- Excellent communication skills and the ability to work collaboratively with cross-functional teams.
- Good to have: knowledge of DBT and Snowflake.

We are an Equal Opportunity Employer: We value diversity at Incedo. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Posted 4 hours ago

Apply

4.0 years

3 - 9 Lacs

Gurgaon

On-site

Responsibilities:
- Design data-driven solutions for business problems.
- Understand automotive domain specifics; consult experts when needed.
- Learn and work with vehicle IoT systems, especially the Telematics Control Unit.
- Build expertise in handling time-series data.
- Perform data cleaning, preparation, and ETL processes.
- Explore data to find insights, trends, and patterns; collaborate with domain experts to test hypotheses.
- Create and select meaningful features for modelling.
- Choose and apply suitable ML/DL algorithms; build training pipelines and optimize models.
- Validate models and use ensemble methods when beneficial.
- Visualize and report findings using graphs and summaries.
- Mentor junior data scientists and support their development.

Essential:
- A minimum of 4 years of industry experience in data science.
- Proficiency in Python programming with experience in the pandas, NumPy, matplotlib, and sklearn libraries.
- Competence in statistical analysis, including descriptive and inferential statistics and hypothesis testing.
- Practical experience in mathematical modelling and a variety of machine learning techniques, such as:
  - Generalized Linear Models (GLM), boosting algorithms, decision trees, neural networks, Support Vector Machines (SVM), and Bayesian methods.
  - Econometric analysis.
  - Deep learning models including Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTM), and Gated Recurrent Units (GRUs).
  - Unsupervised learning algorithms and image classification using computer vision.
- Proven track record of successful model deployment.

Desirable:
- Experience with accident simulation tools such as PC crash or OPENPass.
- Familiarity with time-series and IoT data analytics.
- Familiarity with LLMs.
- Knowledge of automotive systems, vehicle fundamentals, and the Controller Area Network (CAN) protocol.
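The role above emphasizes feature engineering on time-series sensor data from vehicle IoT systems. As an illustration only (the speed samples are invented, not from the posting), rolling-window features of the kind such pipelines compute can be built directly in NumPy:

```python
import numpy as np

def rolling_features(signal, window):
    """Rolling mean and standard deviation over a 1-D sensor signal."""
    signal = np.asarray(signal, dtype=float)
    # Each row of `windows` is one overlapping window of length `window`.
    windows = np.lib.stride_tricks.sliding_window_view(signal, window)
    return windows.mean(axis=1), windows.std(axis=1)

# Hypothetical speed samples (km/h) from a telematics control unit.
speed = [50.0, 52.0, 51.0, 70.0, 49.0]
means, stds = rolling_features(speed, window=3)
```

A spike in the rolling standard deviation (here, around the 70 km/h sample) is a typical cue for downstream anomaly or event detection.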

Posted 4 hours ago

Apply

10.0 years

1 - 9 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Our work area includes the development of data science models to identify aberrant billing patterns in healthcare claims data. We are a team of data scientists developing and evaluating advanced data science models. The role requires evaluating various data science models through advanced statistical techniques, identifying improvement opportunities that result in business impact, creating different metrics for models, and building solid relationships with global analytics teams. The candidate should be an expert in data science and analytics with significant team/stakeholder management and project delivery skills in one or more areas such as model building, model validation, outlier detection analytics, risk analytics, multivariate analysis, and model deployment.

Primary Responsibilities:
- Lead a highly qualified team of data scientists with experience in formulating and solving complex problems.
- Oversee development of predictive modeling algorithms using statistical tools, statistical modeling, optimization, and algorithm development to synthesize large data sets.
- Apply critical thinking skills and perform advanced analytics with the goal of solving complex, multi-faceted business problems; generate deep insights through the analysis of data and understanding of operational processes, and turn them into actionable recommendations.
- Provide technical consulting to business leaders at an appropriate level of information encapsulation.
- Develop actionable AI, NLP, and machine learning solutions and propose recommendations and strategies to solve commercial opportunities in unique ways that return optimal value to the business.
- Use a flexible, analytical approach to design, develop, and evaluate predictive and prescriptive models and advanced algorithms that result in optimal value extraction from the data.
- Deliver robust and scalable analytic solutions in an automation-ready state.
- Collaborate with lines of business to understand their analytic needs, contribute to shaping business solutions, and communicate results of analytics.
- Develop and expand a fledgling applied research program with a focus on impact-focused initiatives, rapid prototyping, and converting applied research into software.
- Be a hands-on leader and teach by example, building prototypes using a variety of predictive methods while adhering to the AI/ML development cycle (e.g., iterative EDA, prediction specification, feature engineering, and model tuning).
- Participate in meetings with business stakeholders, data experts, software/platform engineering, and physicians to scope R&D use cases and convert them into a scope of work.
- Be a thought leader and coach junior team members in a wide variety of statistical/predictive methods (e.g., DL, VAE, Bayesian nets) to attack a given prediction use case.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- PhD/M.Sc./MS degree or higher in operational research, applied mathematics, or statistics with a quantitative emphasis from tier I colleges only.
- 10+ years of experience conducting statistical, research-minded analysis of a variety of data types and forms, with the aim of using this analysis to build or optimize predictive methods.
- 10+ years of deep, hands-on experience in deep learning, NLP, AI, machine learning, and computer vision applied to healthcare or other high-stakes domains (direct experience in a cloud environment is highly valued).
- 10+ years of advanced programming exposure in SAS or Python, including SQL, and good exposure to tools and libraries such as Spark, pandas, the scikit family, NumPy/SciPy, matplotlib, and Streamlit.
- Experience applying computational algorithms and statistical and programming methods (R and Python) to structured and unstructured data.
- Experience with one or more topics such as customer churn, intent prediction, disease/damage progression, anomaly detection, sequence-based models, time-series forecasting, or time-to-event modeling.
- Experience deploying machine learning models in a production setting (logging, monitoring, alerts) in a cloud environment.
- Proven experience in understanding complex problems and datasets and driving data testing and strategic testing of ideas to come up with first-class solutions.
- Ability to apply knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries for prototype development and product improvement.
- Self-driven accumulation of knowledge of methods and modeling advances by reading research publications and blogs and attending presentations and webinars.
- Proven ability to offer thought leadership at a meta level.

Preferred Qualification:
- Experience and understanding of the US healthcare system, billing of medical claims, and associated data environments.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 4 hours ago

Apply

3.0 - 4.0 years

5 - 7 Lacs

India

On-site

Job Title: Python (AI/ML Engineer) (3–4 Years Experience)
Location: Onsite – Surat (Local Candidates Only)
Job Type: Full-Time
Apply at: hrd.kukamitechnology@gmail.com

Key Responsibilities:
- Design, develop, and deploy machine learning models for real-world applications
- Analyze large datasets to extract actionable insights
- Build, train, and fine-tune algorithms (classification, regression, clustering, NLP, etc.)
- Integrate AI/ML models with existing web/mobile applications
- Collaborate with developers and designers to deliver intelligent features
- Stay updated with the latest research and industry trends in AI/ML

Required Skills:
- Strong programming skills in Python and experience with Pandas, NumPy, Scikit-learn
- Hands-on experience with TensorFlow, PyTorch, or other deep learning frameworks
- Knowledge of Natural Language Processing (NLP), Computer Vision, or Recommendation Systems
- Experience with data preprocessing, model evaluation, and optimization
- Understanding of REST APIs and cloud platforms (AWS, GCP, or Azure is a plus)

Job Type: Full-time
Pay: ₹550,000.00 - ₹700,000.00 per year
Education: Bachelor's (Preferred)
Experience: AI/ML Engineer: 4 years (Required)
Language: English (Required)
Location: Vesu, Surat, Gujarat (Required)
Work Location: In person

Posted 4 hours ago

Apply

1.0 - 2.0 years

3 Lacs

Bhopal

On-site

We are seeking a highly motivated and detail-oriented Data Analyst with 1–2 years of experience to join our growing team. The ideal candidate will be responsible for collecting, analyzing, and interpreting large datasets to help drive data-informed decisions across the organization.

Key Responsibilities:
- Analyze structured and unstructured data using SQL, Python, and Excel
- Create dashboards and visualizations using Power BI and Google Sheets
- Develop and maintain reports to track key metrics and business performance
- Work with MongoDB to query and manage NoSQL data
- Interpret data trends and patterns to support business strategy
- Collaborate with cross-functional teams to understand data needs and deliver insights
- Apply statistical methods to solve real-world problems and validate hypotheses
- Ensure data accuracy, integrity, and security across platforms

Required Skills & Qualifications:
- 1–2 years of experience in a Data Analyst or similar role
- Proficient in Excel, including advanced formulas and pivot tables
- Strong knowledge of SQL for querying and data manipulation
- Hands-on experience with MongoDB and NoSQL databases
- Proficient in Python for data analysis (Pandas, NumPy, etc.)
- Experience building reports/dashboards using Power BI
- Skilled in using Google Sheets for collaboration and automation
- Strong logical thinking and problem-solving abilities
- Good understanding of statistics and data modeling techniques
- Excellent communication skills and attention to detail

Job Type: Full-time
Pay: Up to ₹350,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
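The analyst role above leans on SQL for querying and aggregation. A self-contained sketch using Python's built-in sqlite3 module (illustrative only; the orders table and regions are invented for the example):

```python
import sqlite3

# In-memory database standing in for the analyst's reporting store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("North", 100.0), ("North", 150.0), ("South", 80.0)],
)
# Aggregate a key metric per region, as a dashboard query might.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 250.0), ('South', 80.0)]
conn.close()
```

The same GROUP BY pattern transfers directly to production databases; only the connection line changes.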

Posted 4 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us Axi is a leading global provider of margin and deliverable Foreign Exchange, Contracts for Difference (CFDs), and Financial Spread betting. Our business has evolved into a world-class, multifaceted brokerage with offices in six regions. With heavy investment in the latest trading technology, Axi seeks to offer the most comprehensive end-to-end trading experience available, servicing traders of all levels from beginners to institutional-level clients. At Axi, data drives decisions. We’re hiring a BI Specialist to design and maintain data infrastructure, analyze metrics, and enable smart business choices across marketing, product, and finance. What You’ll Do Design data pipelines and dashboards using SQL, Python (Pandas/NumPy), and JavaScript. Execute daily BI operations: data ingestion, validation, and reporting. Analyze conversion funnels and marketing attribution; identify root causes of performance drops. Collaborate with stakeholders to define metrics, troubleshoot anomalies, and implement improvements. Ensure data quality and reliability across all reporting platforms. Maintain project flow using tools like Excel and Jira. About You Proficient in SQL, Python, JavaScript, and Excel. Experience with Pandas/NumPy; familiarity with funnel analysis and marketing attribution. Excellent analytical thinking, structured reasoning, and problem-solving mindset. Proactive self-starter with strong attention to detail. Highly organized, able to work under tight deadlines. Degree in Science or Mathematics—strong academic performance required. Fluent in English, with clear communication skills. Nice to Have BI/reporting experience in iGaming, insurance, e-commerce, or online casino industries. Experience with BI tools (Tableau, Looker, Power BI). 
Axi’s Bag of Delights Competitive compensation Training, development resources, and certification opportunities 18 annual leave days + 12 sick days Local public holidays Health insurance benefits Interview Process Talent Acquisition Screen (30 mins) BI Skills Assessment (1-hour case study or take-home) Technical Interview with BI/Product (60 mins) Hiring Manager Interview (30 mins)
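The conversion-funnel analysis this role calls for can be sketched in a few lines of Python — the stage names and counts below are invented for illustration:

```python
def funnel_conversion(stages):
    """Stage-to-stage conversion rates for a counts-per-stage funnel."""
    rates = {}
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        rates[f"{name_a}->{name_b}"] = n_b / n_a if n_a else 0.0
    return rates

# Hypothetical counts from first visit to funded account.
funnel = [("visit", 10000), ("signup", 1200), ("deposit", 300)]
rates = funnel_conversion(funnel)
# rates["signup->deposit"] == 0.25
```

A drop in any of these ratios week-over-week is the kind of "performance drop" the listing asks the analyst to root-cause.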

Posted 4 hours ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About BNP Paribas India Solutions Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, European Union’s leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 10000 employees, to provide support and develop best-in-class solutions. About BNP Paribas Group BNP Paribas is the European Union’s leading bank and key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group’s commercial & personal banking and several specialised businesses including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporates and institutional clients) to realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas as well as a solid and fast-growing business in Asia-Pacific. 
BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future, while ensuring the Group's performance and stability Commitment to Diversity and Inclusion At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit Discrimination and Harassment of any kind and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status etc. As a global Bank, we truly believe that inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in. About Business Line/Function Global Markets is currently recruiting talented people to join one of the most challenging and exciting part of our Quantitative Research team, the Data and Artificial Intelligence Lab! We are currently recruiting interns (in Mumbai) for the Global Market Data and Artificial Intelligence Lab of BNP Paribas: Global Market is part of the Corporate and Investment Bank and deals with all market activities on Equity, Foreign Exchange and Local Markets, G10 Rates, Primary and Secondary Credit and Financing asset classes. The Lab mission is to leverage the latest techniques of Machine Learning (Deep Learning, Natural Language Processing) on the vast amount of structured and un-structured data we are collecting while doing our business as well as any other public source of information. 
Position Purpose We are, among other things, building models to improve the service we give to our clients (issuing recommendations, anticipating their needs, bringing them the relevant research…), to help traders better understand and manage their risks, and to leverage alternative data sources (social media, news, images…) for the benefit of our strategists. We are looking for candidates with an education in data science who not only have experience in solving complex problems but also understand how and why their models work the way they do. They need to be motivated to deal with large amounts of very diverse data and to extract valuable insights from it. The right candidate must be able to adapt quickly to new challenges, must not be afraid to experiment many times and fail before finding the right solution, will challenge themselves with user feedback, and will have the excitement of seeing their work used in real life by the business. For internships, we are looking at a duration of 6 months and are flexible on the starting date (the earlier the better!). The intern will participate in the life of the Lab and take ownership of one or more topics. We have a great variety of topics; some of the historical propositions included: Prediction of which products are the most likely to be interesting for a given client. Automated generation of market comments. Optimal risk management of interest rate swap risk. Regime disentanglement for financial mixture-of-experts models. Generative modelling for model control. Transformers for quantitative investment strategies. Based on the skillset & business need, we can select a valuable proposition for you! Responsibilities Direct Responsibilities Explore and examine data from multiple diverse data sources. Conceptual modeling, statistical analysis, predictive modeling and optimization design. Data cleanup, normalization and transformation.
Hypothesis testing: being able to develop hypotheses and test them with careful experiments. Contributing Responsibilities Help build workflows for extraction, transformation and loading of different data from a variety of sources and enable linking them to existing systems and datasets. Ensure the integrity and security of data. Technical & Behavioral Competencies An education in data science, with experience in solving complex problems as well as an understanding of how and why models work the way they do. Knowledge of key concepts in Statistics and Mathematics such as Probability Theory, Inference, and Linear Algebra. Knowledge or experience in Machine Learning procedures and tasks such as Classification, Prediction, and Clustering. Programming skills in Python and knowledge of common numerical and machine-learning packages (NumPy, scikit-learn, pandas, Keras, TensorFlow, PyTorch, LangChain). Ability to write clear and concise code in Python. Intellectually curious and willing to learn challenging concepts daily. Involvement with the Data Science community through platforms such as Kaggle, Numerai, OpenML, or others. Knowledge of current Machine Learning/Artificial Intelligence literature. Skills Referential Behavioural Skills: Ability to collaborate / Teamwork Critical thinking Communication skills - oral & written Attention to detail / rigor Transversal Skills Analytical Ability Education Level: Bachelor’s Degree or Master’s Degree or equivalent Experience Level: Beginner
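The hypothesis-testing responsibility above can be illustrated with a small two-sample permutation test in plain Python (the samples are invented; a real experiment would use production data and many more iterations):

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=2000, seed=0):
    """p-value for the observed absolute difference in sample means,
    estimated by randomly reassigning observations to the two groups."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Clearly separated samples should yield a small p-value.
p = permutation_test([5.1, 5.3, 5.2, 5.4], [6.8, 7.0, 6.9, 7.1])
```

The same question would usually be answered with a t-test from SciPy; the permutation version makes the "careful experiment" logic explicit.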

Posted 4 hours ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Klook We are Asia’s leading platform for experiences and travel services, and we believe that we can help bring the world closer together through experiences . Founded in 2014 by 3 avid travelers, Ethan Lin, Eric Gnock Fah and Bernie Xiong, Klook inspires and enables more moments of joy for travelers with over half a million curated quality experiences ranging from the biggest attractions to paragliding adventures, iconic museums to rich cultural tours, and other convenient local travel services across 2,700 destinations around the world. Do you share our belief in the wonders of travel? Our international community of over 1,800 employees, based in 30+ locations, certainly do! Global citizens ourselves, Klookers are not only curating memorable experiences for others but also co-creating our world of joy within Klook. We work hard and play hard, upkeeping our high-performing culture as we are guided daily by our 6 core values: Customer First Push Boundaries Critical Thinking Build for Scale Less is More Win as One We never settle, and together, we believe in achieving greater heights and realizing endless possibilities ahead of us in the dynamic new era of travel. Care to be a part of this revolution? Join us! As a Data Scientist within the Pricing Strategy team, you will play a pivotal role in driving data-driven decision-making and optimizing pricing strategies. You will leverage your expertise in data science and analytics to develop and implement dynamic pricing models, predictive and prescriptive analysis, ultimately contributing to revenue growth and market competitiveness. What You'll Do Dynamic Pricing Model Development: Develop and implement advanced dynamic pricing models to optimize product pricing across various channels and markets. Predictive Analytics: Utilize predictive modeling techniques to forecast demand, market trends, and customer behavior, enabling proactive pricing adjustments. 
Prescriptive Analysis: Employ prescriptive analytics to identify optimal pricing strategies based on specific business objectives and constraints. Data Exploration and Analysis: Conduct in-depth data exploration and analysis to uncover valuable insights and inform pricing decisions. Model Evaluation and Refinement: Continuously evaluate and refine pricing models to ensure their accuracy and effectiveness. Collaboration: Collaborate with cross-functional teams (e.g., marketing, sales, finance) to align pricing strategies with overall business goals. Stay Updated: Stay abreast of the latest advancements in data science and pricing optimization techniques. What You'll Need Master's degree or PhD in Data Science, Statistics, Computer Science, Economics, or a related field. A minimum of 3-4 years of relevant experience in the field of Data Science. Strong programming skills in Python or R, including proficiency in data manipulation, analysis, and visualization libraries (e.g., Pandas, NumPy, Matplotlib, Seaborn). Experience with machine learning algorithms and techniques (e.g., regression, classification, clustering, time series analysis). Knowledge of statistical modeling and hypothesis testing. Experience with data warehousing and cloud computing platforms (e.g., AWS, GCP, Azure) is a plus. Excellent problem-solving and analytical skills. Ability to communicate complex technical concepts to both technical and non-technical audiences. Passion for data-driven decision-making and a continuous learner mindset. Klook is proud to be an equal opportunity employer. We hire talented and passionate people of all backgrounds. We believe that a joyful workplace is an inclusive workplace, one where employees from all walks of life have an equal opportunity to thrive. We’re dedicated to creating a welcoming and supportive culture where everyone belongs. 
Klook does not accept unsolicited resumes from any temporary staffing agency, placement service or professional recruiter (“Agency”). Klook will not be responsible for, and will not pay, any fees, commissions or other payments related to such unsolicited resumes. An Agency must obtain advance written approval from Klook’s Talent Acquisition Team to submit resumes, and then only in conjunction with a valid fully-executed agreement for service and in response to a specific job opening for which the Agency has been requested to submit resumes for. Klook will not be responsible for, and will not pay, any fees, commissions or other payments to any Agency that does not have such agreement in place or does not comply with the foregoing.
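As a toy illustration of the pricing-optimization idea in the role above (not Klook's actual models — the demand curve here is a hypothetical fitted estimate), a grid search for the revenue-maximizing price:

```python
def best_price(prices, demand_fn):
    """Grid-search the revenue-maximising price for an estimated demand curve."""
    return max(prices, key=lambda p: p * demand_fn(p))

# Hypothetical fitted linear demand: units sold falls as price rises.
demand = lambda p: max(0.0, 100.0 - 2.0 * p)

candidates = [round(5 + 0.5 * i, 1) for i in range(80)]  # prices 5.0 .. 44.5
p_star = best_price(candidates, demand)
# Revenue p * (100 - 2p) is maximised at p = 25.
```

Real dynamic-pricing systems would estimate demand from data and add business constraints, but the optimize-over-a-demand-model loop is the same.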

Posted 5 hours ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Purpose The position of Consultant is within the TTEC Digital - Analytics team. The Analytics group is responsible for Data Science and Engineering projects, including the design and validation of data models and building systems that collect, manage and convert raw transactional data into usable data structures to generate insights for decision making. Our Data Engineers work with Data Scientists, Project Leads and Managers on implementation, upgrade, and migration projects. Key Responsibilities Analyze raw data Develop and maintain datasets Improve data quality and efficiency Create solution and design documentation Work on projects independently as well as being part of a large team Develop internal training, processes and best practices Cross-train Junior Data Engineers or other team members in your area of expertise Further develop skills both on the job and through formal learning channels Assist in pre-sales activities by providing accurate work estimates Interact closely with Project Management to deliver projects on time and on budget. Competencies Personal: Strong interpersonal skills, high energy and enthusiasm, integrity, and honesty; flexible, results oriented, resourceful, problem-solving ability, deal effectively with difficult situations, ability to prioritize. Leadership: Ability to gain credibility, motivate and provide leadership; work with a diverse customer base; maintain a positive attitude. Provide support and guidance to more junior team members, particularly for challenging and sensitive assignments. Operations: Ability to manage multiple projects and products. Perform the task at hand in a customer-friendly manner while utilizing time and resources efficiently and effectively. Utilize high-level expertise to address more difficult situations, both from a technical and customer service perspective. Technical: Ability to understand and communicate technical concepts; proficient with Microsoft Project, Visio and Office products.
Technical Skills Python (PyData, pandas, NumPy, PySpark) SQL (MS SQL, Oracle DB, Teradata) Azure Data Factory Azure Databricks Big Data (Spark, Pig, Hive, Sqoop, Kafka, etc.) DevOps (using tools such as GitHub Actions and Jenkins is preferred) Agile/Scrum REST Services and API Management: Implementing API proxies through gateways using Apigee X and/or Apigee Edge; API design, development, and testing, including creating Swagger/OpenAPI specs Education, Experience And Certification Post-Secondary Degree (or Diploma) related to Computer Science, MIS or an IT-related field. BA/BS in an unrelated field will also be considered depending on experience 2–4 years in Data Engineering Exposure to application design and development experience in a cloud environment 2+ years of experience building and deploying containerized applications in a Kubernetes-enabled environment 2+ years of experience coding REST services and APIs using one or more of the following: Python, C#, Node.js, Java Certified Kubernetes Application Developer Google Cloud Certified Apigee API Engineer TTEC Digital and our 1,800+ employees pioneer engagement and growth solutions that fuel the exceptional customer experience (CX). Our sister company, TTEC Engage, is a 60,000+ employee service company, with customer service representatives located around the world. TTEC Holdings Inc. is the parent company for both Digital and Engage. When clients have a holistic need, they can draw from these independently managed centers of excellence, TTEC Digital and TTEC Engage. TTEC is a proud equal opportunity employer where all qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, or disability. TTEC has fully embraced and is committed to expanding our diverse and inclusive workforce. We strive to reflect the communities we serve while delivering amazing service and technology centered around humanity.
Rarely do applicants meet all desired job qualifications, so if you feel you would succeed in the role above, please take a moment and share your qualifications.

Posted 5 hours ago

Apply

0.0 years

0 Lacs

Calicut, Kerala

On-site

Job Summary: We are seeking a highly motivated and enthusiastic Python Intern to join our tech team. This internship offers an excellent opportunity to gain real-world experience in software development using Python. You will work closely with senior developers on live projects, learning modern coding practices, tools, and frameworks. Key Responsibilities: Assist in designing, coding, and testing Python-based applications. Work on automation scripts, web scraping, or API integrations (as required). Support the development team in debugging and improving code. Participate in team meetings and contribute to project planning and problem-solving. Document code, processes, and technical specifications clearly. Learn and apply best practices in version control (e.g., Git) and coding standards. Requirements: Basic understanding of Python programming and OOP concepts. Familiarity with any Python frameworks/libraries (e.g., Flask, Django, Pandas, NumPy, etc.) is a plus. Knowledge of databases (MySQL, SQLite, or MongoDB) preferred. Strong problem-solving skills and willingness to learn. Good communication and teamwork abilities. Currently pursuing or recently completed a degree in Computer Science, IT, or related fields. Job Type: Internship Pay: Up to ₹10,000.00 per month Ability to commute/relocate: Kozhikode, Kerala: Reliably commute or planning to relocate before starting work (Required) Work Location: In person
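A small example of the kind of automation script an intern in this role might write — a hypothetical file organizer using only the standard library:

```python
from pathlib import Path

def sort_by_extension(folder):
    """Move every file in `folder` into a per-extension subfolder
    (e.g. reports/csv/, reports/txt/). Returns {filename: extension}."""
    folder = Path(folder)
    moved = {}
    for f in sorted(folder.iterdir()):   # snapshot before we create subdirs
        if f.is_file():
            ext = f.suffix.lstrip(".") or "no_ext"
            dest_dir = folder / ext
            dest_dir.mkdir(exist_ok=True)
            f.rename(dest_dir / f.name)
            moved[f.name] = ext
    return moved
```

Called as `sort_by_extension("~/Downloads")` (after expanding the path), it tidies a working directory; the same pattern extends to scheduled cleanup jobs.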

Posted 5 hours ago

Apply

0.0 - 4.0 years

5 - 7 Lacs

Vesu, Surat, Gujarat

On-site

Job Title: Python (AI/ML Engineer) (3–4 Years Experience) Location: Onsite – Surat (Local Candidates Only) Job Type: Full-Time Apply at: hrd.kukamitechnology@gmail.com Key Responsibilities: - Design, develop, and deploy machine learning models for real-world applications - Analyze large datasets to extract actionable insights - Build, train, and fine-tune algorithms (classification, regression, clustering, NLP, etc.) - Integrate AI/ML models with existing web/mobile applications - Collaborate with developers and designers to deliver intelligent features - Stay updated with the latest research and industry trends in AI/ML Required Skills: - Strong programming skills in Python and experience with Pandas, NumPy, Scikit-learn - Hands-on with TensorFlow, PyTorch, or other deep learning frameworks - Knowledge of Natural Language Processing (NLP), Computer Vision, or Recommendation Systems - Experience with data preprocessing, model evaluation, and optimization - Understanding of REST APIs, cloud platforms (AWS, GCP, or Azure is a plus) Job Type: Full-time Pay: ₹550,000.00 - ₹700,000.00 per year Education: Bachelor's (Preferred) Experience: AI/ML Engineer: 4 years (Required) Language: English (Required) Location: Vesu, Surat, Gujarat (Required) Work Location: In person
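As an illustration of the model-building skills listed — a from-scratch sketch, not a production approach (real work would use Scikit-learn or a deep-learning framework) — one-feature logistic regression trained by gradient descent:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=500):
    """Fit y ≈ sigmoid(w*x + b) by stochastic gradient descent on log-loss."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad = p - y              # d(log-loss)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy separable data: negatives below zero, positives above.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

The classification/regression/NLP work in the listing follows the same train-evaluate-deploy loop, just with richer models and data.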

Posted 6 hours ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Overview: We are looking for a highly skilled and experienced Senior AI/ML Engineer to join our team. The ideal candidate will have a strong background in artificial intelligence, machine learning, and deep learning, with a proven track record of building and deploying scalable AI models in real-world applications. Key Responsibilities: Design, develop, and deploy scalable AI/ML models and algorithms for real-time or batch processing applications. Collaborate with cross-functional teams including data engineering, software development, and product management to integrate AI solutions into products. Conduct in-depth research and experimentation to improve model performance and develop new approaches using state-of-the-art techniques. Evaluate and optimize ML models for speed, accuracy, and scalability. Maintain and improve existing ML infrastructure, pipelines, and toolkits. Mentor junior developers and contribute to AI/ML best practices and standards. Stay current with the latest research and developments in the AI/ML space. Required Skills and Qualifications: 6+ years of experience in building and deploying machine learning and AI solutions. Strong programming skills in Python (with libraries like TensorFlow, PyTorch, Scikit-learn, NumPy, Pandas, etc.). Solid understanding of machine learning algorithms, statistical modelling, data preprocessing, and feature engineering. Experience in building and tuning deep learning models (CNNs, RNNs, Transformers, etc.). Proficiency in working with cloud platforms (AWS, GCP, Azure) and MLOps tools (MLflow, Kubeflow, etc.). Experience with big data technologies (e.g., Spark, Hadoop) and data pipelines. Strong problem-solving skills and ability to translate business needs into technical solutions. Familiarity with model explainability, bias mitigation, and responsible AI practices. Experience in natural language processing (NLP), computer vision, or recommendation systems.
Familiarity with containerization and orchestration tools (Docker, Kubernetes). Published papers or contributions to open-source ML/AI projects. Certifications (good to have any of the following): Google Professional Machine Learning Engineer Microsoft Certified: Azure AI Engineer Associate AWS Certified Machine Learning – Specialty TensorFlow Developer Certificate Experience: 6+ years of experience in building and deploying machine learning and AI solutions. Educational Qualification(s): Bachelor’s in Computer Science, Machine Learning, Data Science, or a related field. To know our Privacy Policy, please click on the link below or copy-paste the URL into your browser: https://gedu.global/wp-content/uploads/2023/09/GEDU-Privacy-Policy-22092023-V2.0-1.pdf
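The "evaluate and optimize ML models" responsibility above reduces to metrics like these; a from-scratch sketch with toy labels (in practice scikit-learn's `precision_recall_fscore_support` would be used):

```python
def classification_report(y_true, y_pred):
    """Precision, recall and F1 for binary labels, computed from counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

report = classification_report([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# tp=2, fp=1, fn=1  ->  precision = recall = 2/3
```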

Posted 6 hours ago

Apply

50.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About The Opportunity Job Type: Permanent Application Deadline: 26 August 2025 Job Description Title Senior Test Analyst Department ISS DELIVERY - DEVELOPMENT - GURGAON Location INB905E Level 3 We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our ISS Delivery team and feel like you’re part of something bigger. About Your Team The Investment Solutions Services (ISS) delivery team provides systems development, implementation and support services for FIL’s global Investment Management businesses across the asset management lifecycle. We support Fund Managers, Research Analysts, Traders and Investment Services Operations in all of FIL’s international locations, including London, Hong Kong, and Tokyo. About Your Role You will be joining as a Senior Test Analyst in the QA chapter and will be responsible for executing testing activities for all applications under IM technology based out of India. Here is what a typical day in this job will look like: Understand business needs and analyse requirements and user stories to carry out different testing activities. Collaborate with developers and BAs to understand new features, bug fixes, and changes in the codebase. Create and execute functional as well as automated test cases on different test environments to validate the functionality. Log defects in the defect tracker and work with PMs and devs to prioritise and resolve them. Develop and maintain automation scripts, preferably using the Python stack. Build a deep understanding of databases, both relational and non-relational. Document test cases, results and any other issues encountered during testing. Attend team meetings and stand-ups to discuss progress, risks and any issues that affect project deliveries. Stay updated with new tools, techniques and industry trends.
About You A seasoned Software Test Analyst with 5+ years of hands-on experience. Hands-on experience in automating web and backend testing using open-source tools (Playwright, pytest, Selenium, Requests, REST Assured, NumPy, pandas). Proficiency in writing and understanding complex DB queries in various databases (Oracle, Snowflake). Good understanding of cloud (AWS, Azure). Experience in the finance/investment domain is preferable. Strong logical reasoning and problem-solving skills. Preferred programming languages: Python and Java. Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI) for automating deployment and testing workflows. Feel rewarded For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
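A minimal sketch of the kind of automated backend check this role describes, in pytest style (the payload shape and field names are hypothetical):

```python
def validate_quote_payload(payload):
    """Assert the basic invariants of a (hypothetical) price-quote response."""
    assert set(payload) >= {"symbol", "bid", "ask"}, "missing required fields"
    assert payload["ask"] >= payload["bid"], "crossed quote: ask below bid"
    return True

# pytest collects any function named test_* automatically;
# the same asserts also run as plain Python.
def test_quote_is_well_formed():
    quote = {"symbol": "EURUSD", "bid": 1.0841, "ask": 1.0843}
    assert validate_quote_payload(quote)
```

In a real suite the payload would come from a Requests call against a test environment, with the validator shared across functional and automated cases.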

Posted 6 hours ago

Apply

2.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Requirements Bachelor’s degree in computer science, computer engineering, or a related field. 2–3 years of experience as a Python developer. Can communicate with international clients independently. Expert knowledge of Python and related frameworks, including Django and Flask. Expertise in SQL, Machine Learning, Data Science, and AWS Explorer. A deep understanding of multi-process architecture and the threading limitations of Python. Familiarity with server-side templating languages, including Jinja2 and Mako. Ability to integrate multiple data sources into a single system. Familiarity with testing tools. Ability to collaborate on projects and work independently when required. Must Skills Expertise in AWS S3, RDS, DynamoDB, Elasticsearch, and dataflows, and strong knowledge of SQL and NoSQL databases. Expertise in Python and various Python libraries like Pandas, Boto3, NumPy, etc. Roles and Responsibilities Coordinating with development teams to determine application requirements. Writing scalable code using the Python programming language. Designing and developing scalable data pipelines and ETL processes on AWS using Python-based technologies (e.g., AWS Glue, AWS Lambda, and AWS Step Functions). Monitoring and troubleshooting data pipelines, addressing any issues or bottlenecks that arise in a timely manner. Testing and debugging applications. Developing back-end components. Integrating user-facing elements using server-side logic. Assessing and prioritising client feature requests. Integrating data storage solutions. Coordinating with front-end developers. Reprogramming existing databases to improve functionality. Developing digital tools to monitor online traffic.
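A toy sketch of the extract-transform-load pattern the role describes, using only the standard library (the CSV data and cleaning rules are invented; a real pipeline would run in AWS Glue or Lambda against S3 objects):

```python
import csv
import io

RAW = """user_id,amount
1, 10.5
2,
1,4.5
"""

def etl(raw_csv):
    """Extract rows, transform (strip whitespace, drop blank amounts,
    cast types), and load into a per-user running total."""
    totals = {}
    for row in csv.DictReader(io.StringIO(raw_csv)):
        amount = row["amount"].strip()
        if not amount:            # validation: skip rows with no amount
            continue
        uid = int(row["user_id"])
        totals[uid] = totals.get(uid, 0.0) + float(amount)
    return totals

totals = etl(RAW)
# {1: 15.0}; user 2's blank row is rejected by validation
```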

Posted 7 hours ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Description Fox Trading Solutions is a quantitative trading firm managing funds with a strong focus on automation and data-driven strategies. Role Description We are looking for an enthusiastic Python Developer Intern who will work closely with our quant and tech team to support: Backtesting of trading strategies Data cleaning, ingestion, and storage pipelines Routine management of financial market datasets Writing scripts for data automation and scheduled jobs What we Expect Good command of Python and basic data structures Familiarity with Pandas and NumPy for time-series data Understanding of the stock market and derivatives is a big plus Interest in algorithmic trading or financial data analytics Self-motivated and able to work in a fast-paced team environment Send your resume (and GitHub if available) to [foxtradingsolutions@gmail.com]
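The backtesting task above can be sketched with a simple moving-average crossover strategy in plain Python (toy prices; a real backtest would use pandas time series, transaction costs, and proper market data):

```python
def sma(series, window):
    """Simple moving average; None until a full window is available."""
    return [sum(series[i - window + 1:i + 1]) / window if i >= window - 1 else None
            for i in range(len(series))]

def backtest_crossover(prices, fast=2, slow=3):
    """Hold a long position whenever the fast SMA is above the slow SMA;
    return the sum of per-bar returns earned while in the market."""
    f, s = sma(prices, fast), sma(prices, slow)
    total = 0.0
    for i in range(1, len(prices)):
        in_market = (f[i - 1] is not None and s[i - 1] is not None
                     and f[i - 1] > s[i - 1])
        if in_market:
            total += (prices[i] - prices[i - 1]) / prices[i - 1]
    return total
```

An uptrend should produce a positive total and a pure downtrend should keep the strategy flat, which makes the logic easy to sanity-check.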

Posted 7 hours ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Scientist Engineer Location: Noida Experience: 6–10 years Mode: 6-month+ contract Role Summary: Looking for a Data Scientist Engineer with strong experience in AI/ML, data collection and preprocessing, estimation, and architecture creation. Responsibilities: Model Development: Design and implement ML models to tackle complex business challenges. Data Preprocessing: Clean, preprocess, and analyze large datasets for meaningful insights and model features. Model Training: Train and fine-tune ML models using various techniques including deep learning and ensemble methods. Evaluation and Optimization: Assess model performance, optimize for accuracy, efficiency, and scalability. Deployment: Deploy ML models in production, monitor performance for reliability. Collaboration: Work with data scientists, engineers, and stakeholders to integrate ML solutions. Research: Stay updated on ML/AI advancements, contribute to internal knowledge. Documentation: Maintain comprehensive documentation for all ML models and processes. Qualification: Bachelor's or Master's in Computer Science, Machine Learning, Data Science, or a related field, with 6–10 years of experience. Desirable Skills: Must Have 1. Experience in time-series forecasting, regression models, and classification models 2. Python, R, data analysis 3. Handling large datasets with Pandas, NumPy and Matplotlib 4. Version control: Git or any other 5. ML frameworks: hands-on experience in TensorFlow, PyTorch, Scikit-learn, Keras 6. Good knowledge of cloud platforms (AWS/Azure/GCP), Docker, Kubernetes 7. Model selection, evaluation, deployment, data collection and preprocessing, feature engineering, estimation Good to Have Experience with Big Data and analytics using technologies like Hadoop, Spark, etc. Additional experience or knowledge in AI/ML technologies beyond the mentioned frameworks. BFSI and banking domain experience.
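As a small illustration of the time-series forecasting skill listed (a hypothetical example with made-up numbers; production models would be far richer), one-step-ahead simple exponential smoothing:

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing:
    level <- alpha * observation + (1 - alpha) * level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

forecast = exp_smooth_forecast([10, 12, 11, 13], alpha=0.5)
# levels evolve 10 -> 11 -> 11 -> 12, so the forecast is 12.0
```

Higher `alpha` weights recent observations more heavily; classical models (ARIMA, Holt-Winters) and ML regressors build on the same recency idea.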

Posted 8 hours ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Senior Data Engineer: Incedo is a US-based consulting, data science and technology services firm with over 2,000 people helping clients from our six offices across the US and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science, and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, financial services, product engineering and life science & healthcare industries. Working at Incedo will provide you an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities, starting with a structured onboarding program and carrying through various stages of your career. A variety of fun activities are also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect or a domain expert based on your skills and interests. Location: Gurugram & Pune Experience: 4 Years to 7 Years Notice Period: Immediate OR Serving Notice OR 30 Days Official Notice Only. Role Description: We are seeking a skilled professional to maintain and support batch jobs in a legacy environment. The role involves managing and monitoring ETL processes, addressing issues, and enhancing existing PL/SQL scripts. The ideal candidate will have strong expertise in Informatica, SQL Server, and data warehousing concepts, along with experience in troubleshooting and improving batch job performance. Key Responsibilities: Design and implement robust ETL pipelines using AWS Glue, Redshift, Lambda, and S3. Monitor and optimize the performance of data workflows and batch processing jobs.
Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks. Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy. Develop and maintain automated solutions for data transformation, migration, and integration tasks. Implement best practices for data security, data governance, and compliance within AWS environments. Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management. Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes. Required Skills and Experience: Strong experience with Data Engineering practices. Experience in AWS services, particularly AWS Redshift, Glue, Lambda, S3, and other AWS data tools. Proficiency in SQL, python , Pyspark, numpy etc and experience in working with large-scale data sets. Experience in designing and implementing ETL pipelines in cloud environments. Expertise in troubleshooting and optimizing data processing workflows. Familiarity with data warehousing concepts and cloud-native data architecture. Knowledge of automation and orchestration tools in a cloud-based environment. Strong problem-solving skills and the ability to debug and improve the performance of data jobs. Excellent communication skills and the ability to work collaboratively with cross-functional teams. Good to have knowledge of DBT & Snowflake Preferred Qualifications: Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or a related field. Experience with other AWS data services like Redshift, Athena, or Kinesis. Familiarity with Python or other scripting languages for data engineering tasks. Experience with containerization and orchestration tools like Docker or Kubernetes. We are an Equal Opportunity Employer: We value diversity at Incedo. 
We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Posted 8 hours ago

Apply

0 years

0 Lacs

Haryana, India

On-site

Role Description: We are seeking a skilled professional to maintain and support batch jobs in a legacy environment. The role involves managing and monitoring ETL processes, addressing issues, and enhancing existing PL/SQL scripts. The ideal candidate will have strong expertise in Informatica, SQL Server, and data warehousing concepts, along with experience in troubleshooting and improving batch job performance.

Key Responsibilities:
  • Design and implement robust ETL pipelines using AWS Glue, Lambda, Redshift and S3.
  • Monitor and optimize the performance of data workflows and batch processing jobs.
  • Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks.
  • Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy.
  • Develop and maintain automated solutions for data transformation, migration, and integration tasks.
  • Implement best practices for data security, data governance, and compliance within AWS environments.
  • Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management.
  • Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes.

Required Skills and Experience:
  • Strong experience with data engineering practices.
  • Experience with AWS services, particularly AWS Glue, Lambda, S3, and other AWS data tools.
  • Proficiency in SQL, Python, PySpark, NumPy, etc., and experience working with large-scale data sets.
  • Experience in designing and implementing ETL pipelines in cloud environments.
  • Expertise in troubleshooting and optimizing data processing workflows.
  • Familiarity with data warehousing concepts and cloud-native data architecture.
  • Knowledge of automation and orchestration tools in a cloud-based environment.
  • Strong problem-solving skills and the ability to debug and improve the performance of data jobs.
  • Excellent communication skills and the ability to work collaboratively with cross-functional teams.
  • Good to have: knowledge of dbt & Snowflake.

We are an Equal Opportunity Employer: We value diversity at Incedo. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Posted 8 hours ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Adambakkam, Chennai, Tamil Nadu

On-site

AI/ML Developer – Retail & F&B AI Solutions
Location: Chennai, Tamil Nadu

About Us
Uth Software and DwarfEye Labs Pvt. Ltd. is a rapidly evolving tech company developing cutting-edge AI-powered solutions, modern ERP systems, and billing software tailored for the Retail and Food & Beverage (F&B) sectors. Our mission is to empower businesses through automation, data intelligence, and smart decision-making using Artificial Intelligence and Machine Learning. We are now expanding our AI/ML team and looking for talented developers to contribute to the next generation of business automation tools.

Job Description
We are seeking AI/ML Developers (6 months to 2 years of experience) who are passionate about building intelligent systems that solve real business challenges. You will work with our product and engineering teams to design, develop, and deploy AI/ML models and solutions integrated into our business software, including image- and video-based AI applications.

Key Responsibilities
  • Develop and implement machine learning models for forecasting, customer behavior analysis, inventory optimization, etc.
  • Work on image recognition, object detection, and video analytics using real-world datasets
  • Handle data preprocessing, feature engineering, model training, validation, and performance tuning
  • Collaborate with software developers to integrate ML and computer vision solutions into existing applications
  • Assist in building AI-powered modules like chatbots, recommendation engines, predictive dashboards, and video-based intelligence tools
  • Stay up to date with the latest AI/ML trends, technologies, and best practices
  • Document your code, models, and logic for internal teams

Requirements
  • 6 months to 2 years of hands-on experience in AI/ML development
  • Strong proficiency in Python and libraries such as Scikit-learn, Pandas, NumPy, TensorFlow, PyTorch, etc.
  • Working knowledge of computer vision libraries like OpenCV, YOLO (You Only Look Once) or MediaPipe is a plus
  • Good understanding of object detection, image classification, and frame-by-frame video analysis
  • Good understanding of machine learning algorithms, model evaluation metrics, and data wrangling
  • Experience with real-time or batch data processing and model deployment is a plus
  • Familiarity with SQL, APIs, and version control (Git)
  • Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field
  • Strong problem-solving mindset and ability to work independently or in small teams

Additional Preferences
  • Immediate joiners will be given priority
  • Preference for candidates who are natives of Tamil Nadu or currently settled in Tamil Nadu
  • Candidates with real-time project experience in video analytics, predictive analytics, or decision intelligence will be given added consideration

Preferred Skills (Good to Have)
  • Experience with LLMs, Generative AI, or NLP
  • Exposure to video processing, gesture recognition, or frame-by-frame analysis
  • Experience with MLOps tools (MLflow, Docker, etc.)
  • Knowledge of business domains like retail, restaurant, or ERP systems

What We Offer
  • Competitive monthly salary based on experience (₹20,000 to ₹35,000/month)
  • Opportunity to work on real-world AI use cases in retail and F&B
  • Access to modern tools, datasets, and resources to support R&D
  • Clear career growth path with promotion and upskilling opportunities
  • Certificate of Experience

How to Apply
Send your updated resume to vaikundamani@dwarfeye.ai with the subject line "Application – AI/ML Developer (6M–2Y)". For more information, call us at +91-8608721111. Shortlisted candidates will be contacted for a direct interview. Join us and build the future of intelligent business software powered by AI.

Job Type: Full-time
Pay: ₹20,000.00 - ₹35,000.00 per month
Work Location: In person
Application Deadline: 10/10/2025

Posted 12 hours ago

Apply

0 years

0 Lacs

Gujarat, India

On-site

Role Overview
We are seeking an experienced and technically proficient AI/ML Data Scientist Team Lead to drive innovation and lead a team focused on building scalable machine learning models and AI-driven solutions. The candidate must have a strong foundation in data science, deep learning, NLP, and Python programming, along with proven leadership capabilities. This is a highly strategic role that combines hands-on model development with technical team leadership.

Key Responsibilities
  • Lead and mentor a team of data scientists and ML engineers, providing guidance on project design, implementation, and deployment.
  • Collaborate with stakeholders to understand business problems and translate them into data science solutions.
  • Design, build, and validate machine learning and deep learning models for classification, regression, clustering, recommendation systems, etc.
  • Implement NLP pipelines for text classification, entity extraction, sentiment analysis, topic modeling, and language modeling.
  • Leverage frameworks like TensorFlow, PyTorch, Scikit-learn, and spaCy to build, train, and fine-tune ML/DL models.
  • Optimize models for accuracy, performance, and scalability in production environments.
  • Integrate AI models into applications and workflows via REST APIs and cloud-based solutions.
  • Conduct code reviews and model evaluations, and ensure reproducibility and model governance.
  • Stay updated with the latest advancements in AI/ML research and contribute to the continuous improvement of the data science roadmap.

Technical Requirements
  • Strong programming skills in Python, including libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Matplotlib, and Seaborn.
  • Experience with machine learning algorithms, including supervised and unsupervised learning techniques.
  • Proficiency in deep learning architectures such as CNNs, RNNs, Transformers, GANs, etc.
  • Hands-on experience with Natural Language Processing (NLP) using tools like spaCy, NLTK, and HuggingFace Transformers.
  • Good understanding of model evaluation, hyperparameter tuning, and cross-validation techniques.
  • Experience with ML model deployment using Flask/FastAPI and Docker is desirable.
  • Knowledge of cloud platforms (AWS, Azure, or GCP) for training and deployment pipelines is a plus.

Leadership & Soft Skills
  • Proven ability to lead technical teams and manage end-to-end AI/ML project lifecycles.
  • Excellent written and verbal communication to present complex results to technical and non-technical audiences.
  • Strong organizational and time-management skills with a delivery-focused approach.
  • Enthusiastic about building and mentoring a high-performing team in a fast-paced environment.

Posted 14 hours ago

Apply

Exploring numpy Jobs in India

Numpy is a widely used library in Python for numerical computing and data analysis. In India, there is a growing demand for professionals with expertise in numpy. Job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.
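To make the "numerical computing" part concrete, here is a minimal sketch (using invented numbers) of the library's core idea: whole-array arithmetic on the ndarray type instead of explicit Python loops.

```python
import numpy as np

# Hypothetical price data, purely for illustration
prices = [99.5, 120.0, 174.5]
arr = np.array(prices)          # a numpy ndarray

# Arithmetic applies element-wise to the whole array, no loop needed
print((arr * 2).tolist())       # [199.0, 240.0, 349.0]
print(arr.sum())                # 394.0
```

This vectorized style is what most numpy interview questions and job requirements ultimately revolve around.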

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for numpy professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

Typically, a career in numpy progresses as follows:

  1. Junior Developer
  2. Data Analyst
  3. Data Scientist
  4. Senior Data Scientist
  5. Tech Lead

Related Skills

In addition to numpy, professionals in this field are often expected to have knowledge of:

  • Pandas
  • Scikit-learn
  • Matplotlib
  • Data visualization
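These libraries build directly on numpy. A minimal sketch, using an invented mini-dataset, of how they interoperate: a pandas column is backed by a numpy ndarray, and numpy functions apply directly to pandas objects.

```python
import numpy as np
import pandas as pd

# Invented regional revenue figures, purely for illustration
sales = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "revenue": [120.0, 135.5, 99.0, 150.25],
})

# A pandas column converts cleanly to a numpy array...
arr = sales["revenue"].to_numpy()
print(arr.mean())                     # 126.1875

# ...and numpy functions like np.where operate on pandas Series directly
sales["above_avg"] = np.where(sales["revenue"] > arr.mean(), "yes", "no")
print(sales["above_avg"].tolist())    # ['no', 'yes', 'no', 'yes']
```

The same array would typically feed a Matplotlib plot (e.g. `plt.bar(sales["region"], arr)`), which is why these skills are usually listed together.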

Interview Questions

  • What is numpy and why is it used? (basic)
  • Explain the difference between a Python list and a numpy array. (basic)
  • How can you create a numpy array with all zeros? (basic)
  • What is broadcasting in numpy? (medium)
  • How can you perform element-wise multiplication of two numpy arrays? (medium)
  • Explain the use of the np.where() function in numpy. (medium)
  • What is vectorization in numpy? (advanced)
  • How does memory management work in numpy arrays? (advanced)
  • Describe the difference between np.array and np.matrix in numpy. (advanced)
  • How can you speed up numpy operations? (advanced)
  • ...
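Several of the basic and medium questions above can be demonstrated in a few lines each. This is an illustrative sketch with made-up values, not a set of model interview answers:

```python
import numpy as np

# Create an array of all zeros (the shape is given as a tuple)
z = np.zeros((2, 3))

# Element-wise multiplication: * works per element, unlike Python lists
a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
print((a * b).tolist())                 # [10, 40, 90]

# Broadcasting: a (3, 1) column and a (3,) row stretch to a (3, 3) result
col = np.array([[0], [10], [20]])
row = np.array([1, 2, 3])
print((col + row).shape)                # (3, 3)

# np.where: a vectorized if/else over the whole array
x = np.array([-2, -1, 0, 1, 2])
print(np.where(x < 0, 0, x).tolist())   # [0, 0, 0, 1, 2]

# Vectorization: whole-array operations run in compiled code, which is
# the main answer to "how can you speed up numpy operations?"
vals = np.arange(1_000_000, dtype=np.int64)
total = (vals ** 2).sum()               # no explicit Python loop
```

Being able to explain each step here, rather than just run it, is what separates the basic questions from the advanced ones on memory management and performance.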

Closing Remark

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies