8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Scientist – OpenCV
Experience: 2–3 Years
Location: Bangalore
Notice Period: Immediate Joiners Only

Job Overview
We are looking for a passionate and driven Data Scientist with a strong foundation in computer vision, image processing, and OpenCV. This role is ideal for professionals with 2–3 years of experience who are excited about working on real-world visual data problems and eager to contribute to impactful projects in a collaborative environment.

Key Responsibilities
- Develop and implement computer vision solutions using OpenCV and Python.
- Work on tasks including object detection, recognition, tracking, and image/video enhancement.
- Clean, preprocess, and analyze large image and video datasets to extract actionable insights.
- Collaborate with senior data scientists and engineers to deploy models into production pipelines.
- Contribute to research and proof-of-concept projects in computer vision and machine learning.
- Prepare clear documentation for models, experiments, and technical processes.

Required Skills
- Proficiency in OpenCV and image/video processing techniques.
- Strong coding skills in Python, with familiarity with libraries such as NumPy, Pandas, and Matplotlib.
- Solid understanding of basic machine learning and deep learning concepts.
- Hands-on experience with Jupyter Notebooks; exposure to TensorFlow or PyTorch is a plus.
- Excellent analytical, problem-solving, and debugging skills.
- Effective communication and collaboration abilities.

Preferred Qualifications
- Bachelor's degree in Computer Science, Data Science, Electrical Engineering, or a related field.
- Practical exposure through internships or academic projects in computer vision or image analysis.
- Familiarity with cloud platforms (AWS, GCP, Azure) is an added advantage.

What We Offer
- A dynamic and innovation-driven work culture.
- Guidance and mentorship from experienced data science professionals.
- The chance to work on impactful, cutting-edge projects in computer vision.
- Competitive compensation and employee benefits.
Posted 3 days ago
1.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Qualifications

Experience:
- At least 2 years of Clinical Trial EDC experience (preferably with EDC systems such as Veeva, RAVE, or Oracle) and/or reporting experience (preferably JReview, Spotfire, Jupyter Labs, SQL).

Technical Skills:
- Familiarity with common troubleshooting tools and technologies (e.g., JIRA, Dynamics).
- Familiarity with Microsoft Office products (e.g., Excel, PowerPoint, Dynamics).

Nice to have:
- Understanding of web technologies and web services (e.g., RESTful APIs, SOAP).
- Experience with database and programming technologies (e.g., Java, Python, SQL) and troubleshooting issues related to cloud-based software systems.

Soft Skills:
- Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical audiences and vice versa.
- Strong problem-solving skills.
- Customer-centric mindset with a passion for helping clients resolve issues.
- Ability to multitask, prioritize, and manage a high volume of requests.

Education:
- Bachelor's degree in Computer Science, Information Technology, Life Sciences, or a related field (preferred but not required).
Posted 3 days ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Summary
We are looking for a talented Data Scientist to join our team. The ideal candidate will have a strong foundation in data analysis, statistical models, and machine learning algorithms. You will work closely with the team to solve complex problems and drive business decisions using data. This role requires strategic thinking, problem-solving skills, and a passion for data.

Job Responsibilities
- Analyse large, complex datasets to extract insights and determine appropriate techniques to use.
- Build predictive models and machine learning algorithms, and conduct A/B tests to assess the effectiveness of models.
- Present information using data visualization techniques.
- Collaborate with different teams (e.g., product development, marketing) and stakeholders to understand business needs and devise possible solutions.
- Stay updated with the latest technology trends in data science.
- Develop and implement real-time machine learning models for various projects.
- Engage with clients and consultants to gather and understand project requirements and expectations.
- Write well-structured, detailed, and compute-efficient code in Python to facilitate data analysis and model development.
- Utilize IDEs such as Jupyter Notebook, Spyder, and PyCharm for coding and model development.
- Apply agile methodology in project execution, participating in sprints, stand-ups, and retrospectives to enhance team collaboration and efficiency.

Education
IC: Typically requires a minimum of 5 years of related experience. Mgr & Exec: Typically requires a minimum of 3 years of related experience.

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.

Equal Opportunity Employer
NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.

Why NetApp
We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better, but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches.

We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life.

If you want to help us build knowledge and solve big problems, let's talk.
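The A/B-testing responsibility in this role typically reduces to a two-proportion z-test on conversion counts. A minimal stdlib-only sketch (the counts below are invented example numbers, not from the posting):

```python
from math import sqrt, erf

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple:
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 13% vs. 10% for A, with 2000 users per arm.
z, p = ab_test_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2), round(p, 4))
```

In practice a library such as `scipy.stats` or `statsmodels` would be used instead; the point here is only the shape of the calculation.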
Posted 4 days ago
2.0 - 5.0 years
3 - 7 Lacs
Gurugram
Work from Office
We're seeking a skilled Software Engineer with expertise in C++ and Python; experience working on Large Language Models (LLMs) is a plus.

Desired Skills and Experience

Essential skills
- Minimum of a bachelor's degree in a technical or quantitative field with a strong academic background.
- Demonstrated ability to implement data engineering pipelines and real-time applications in C++ (Python is a plus).
- Proficiency with C++ tools such as the STL; object-oriented programming in C++ is a must.
- Experience with Linux/Unix shell scripting languages and Git is a must.
- Experience with Python-based tools like Jupyter Notebook and coding standards like PEP 8 is a plus.
- Strong problem-solving skills and understanding of data structures and algorithms.
- Experience with large-scale data processing and pipeline development.
- Understanding of various LLM frameworks and experience with prompt engineering using Python or other scripting languages.

Nice to have
- Knowledge of natural language processing (NLP) concepts; familiarity with integrating and leveraging LLM APIs for various applications.

Key Responsibilities
- Design, develop, and maintain projects using C++ and Python, along with operational support.
- Transform a wide range of structured and unstructured data into standardized outputs for quantitative analysis and financial engineering.
- Participate in code reviews, ensure coding standards, and contribute to the improvement of the codebase.
- Develop utility tools that further automate the software development, testing, and deployment workflow.
- Collaborate with internal and external cross-functional teams.

Key Metrics
- C++

Behavioral Competencies
- Good communication (verbal and written), critical thinking, and attention to detail.
Posted 4 days ago
3.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Title: Data Scientist
Location: Bangalore, India
Experience: 3+ Years

Role & Responsibilities:
We are looking for a highly motivated Data Scientist to join our analytics team in Bangalore. This role is ideal for individuals who possess strong machine learning and data engineering skills and are eager to solve complex problems in the financial technology space. You will work on developing predictive models, building scalable data solutions, and contributing to key decision-making processes.
- Design, build, and validate machine learning models including Logistic Regression, Random Forests, GBM, and survival models.
- Perform rigorous model evaluation using techniques such as K-Fold Cross Validation, Out-of-Time (OOT) validation, and X-Tab analysis.
- Utilize Python (via PyCharm/Jupyter), scikit-learn, and PyTorch or TensorFlow for end-to-end model development.
- Analyse large datasets using SQL and Excel to derive actionable insights.
- Develop and deploy APIs to serve machine learning models in production environments.
- Apply statistical techniques such as mean/variance analysis, probability distributions, and simulations to drive model accuracy and relevance.
- (Optional) Work with lending-specific metrics such as vintage curves, roll forward tracking, and bounce rates to enhance financial models.

Requirements:
- Strong ML skills: Logistic Regression, Random Forest, GBM, survival models, K-Fold testing, OOT, X-Tab, etc.
- Experience using scikit-learn and PyTorch or TensorFlow.
- 3+ years of experience with Python (PyCharm/Jupyter), SQL, and Excel.
- Well versed in API development.
- Strong mathematical ability: mean, variance, probability distributions, simulation, etc.
- Understanding of lending metrics such as vintage curves, roll forward, and bounces (good to have, not mandatory).
- Engineering background preferred; IIT/NIT a plus, not a must.
- Prior experience in Lending Fintech preferred.
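The K-Fold Cross Validation named in the requirements can be sketched with scikit-learn on synthetic data; the dataset and metric choice here are illustrative, not from the posting:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data standing in for, e.g., repayment labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

model = LogisticRegression(max_iter=1000)
# 5-fold cross-validation: the model is fit on 4 folds and scored on the
# held-out fold, rotating so every fold serves as the test set once.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC-AUC: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

OOT validation differs in that the holdout is a later time window rather than a random fold, which better matches how lending models are consumed in production.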
Posted 5 days ago
1.0 - 3.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Job Title: Data Analyst / AI & Deep Learning
Location: Remote
Type: Full-time

Company Overview
We work with organizations building advanced AI-powered solutions in areas such as real-time security, smart surveillance, and human behaviour analysis. Our clients combine technologies like computer vision, sensor fusion, and deep learning to drive innovation across embedded systems and cloud platforms. These solutions aim to make physical spaces safer, smarter, and more responsive. We're looking for early-career professionals who are eager to work with complex datasets, support machine learning workflows, and contribute to impactful, real-world AI products.

Who Would Be a Good Fit?
Candidates with a strong foundation in data analysis, statistics, or machine learning fundamentals who are passionate about turning messy, real-world data into meaningful insights. This role is perfect for someone with hands-on exposure to tools like Python and SQL and an interest in working on next-generation AI products.

Key Responsibilities
- Support the collection, cleaning, and preparation of datasets (image, video, sensor, time series).
- Assist in building and maintaining data pipelines for training and evaluating models.
- Perform exploratory data analysis (EDA) to uncover patterns or anomalies.
- Work with cross-functional teams to monitor model metrics and performance.
- Create visualizations and reports using tools like Seaborn, Plotly, or Tableau.
- Support annotation processes and help manage data quality checks.
- Automate routine data handling and reporting tasks using Python and SQL.

Required Skills & Qualifications
- 1 to 3 years of experience in a Data Analysis, Data Science, or related role.
- Proficiency in Python, along with libraries like Pandas and NumPy.
- Solid understanding of data cleaning, preprocessing, and exploratory analysis.
- Working knowledge of SQL and experience working with relational databases.
- Familiarity with data visualization libraries or tools (e.g., Plotly, Tableau, Power BI).
- Experience working with Jupyter or Google Colab for prototyping and analysis.
- Strong attention to detail and ability to follow data governance and documentation practices.

Nice to Have
- Exposure to computer vision or sensor data projects.
- Experience with data labelling or annotation platforms (e.g., Label Studio, Roboflow).
- Familiarity with cloud platforms (e.g., AWS S3, EC2) and version control (Git).
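The EDA and data-quality responsibilities above can be illustrated with a minimal pandas sketch; the sensor frame below is invented for illustration:

```python
import numpy as np
import pandas as pd

# Small synthetic sensor-reading frame with a deliberate gap and an extreme value.
df = pd.DataFrame({
    "sensor_id": ["a", "a", "b", "b", "b"],
    "reading":   [1.0, 1.2, 0.9, np.nan, 50.0],
})

# Basic EDA: missing-value count and per-sensor summary statistics.
missing = df["reading"].isna().sum()
summary = df.groupby("sensor_id")["reading"].agg(["count", "mean", "max"])
print(f"missing readings: {missing}")
print(summary)
```

On real image/video/time-series datasets the same pattern scales up: group, aggregate, and flag gaps or anomalies before any model training touches the data.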
Posted 5 days ago
6.0 - 10.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate,

We are seeking a highly skilled and motivated Software Engineer with expertise in Azure AI, Cognitive Services, Machine Learning, and IoT. The ideal candidate will design, develop, and deploy intelligent applications leveraging Azure cloud technologies, AI-driven solutions, and IoT infrastructure to drive business innovation and efficiency.

Responsibilities:
- Develop and implement AI-driven applications using Azure AI and Cognitive Services.
- Design and deploy machine learning models to enhance automation and decision-making processes.
- Integrate IoT solutions with cloud platforms to enable real-time data processing and analytics.
- Collaborate with cross-functional teams to architect scalable, secure, and high-performance solutions.
- Optimize and fine-tune AI models for accuracy, performance, and cost-effectiveness.
- Ensure best practices in cloud security, data governance, and compliance.
- Monitor, maintain, and troubleshoot AI and IoT solutions in production environments.
- Stay updated with the latest advancements in AI, ML, and IoT technologies to drive innovation.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong experience with Azure AI, Cognitive Services, and Machine Learning.
- Proficiency in IoT architecture, data ingestion, and processing using Azure IoT Hub, IoT Edge, or related services.
- Expertise in deploying and managing machine learning models in cloud environments.
- Strong understanding of RESTful APIs, microservices, and cloud-native application development.
- Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).
- Knowledge of cloud security principles and best practices.
- Excellent problem-solving skills and the ability to work in an agile development environment.

Preferred Qualifications:
- Certifications in Microsoft Azure AI, IoT, or related cloud technologies.
- Experience with Natural Language Processing (NLP) and Computer Vision.
- Familiarity with big data processing and analytics tools such as Azure Data.
- Prior experience in deploying edge computing solutions.

Soft Skills:
- Problem-Solving: Ability to analyze complex problems and develop effective solutions.
- Communication Skills: Strong verbal and written communication skills to effectively collaborate with cross-functional teams.
- Analytical Thinking: Ability to think critically and analytically to solve technical challenges.
- Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment.
- Adaptability: Ability to quickly learn and adapt to new technologies and methodologies.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 5 days ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Hybrid
Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, ETL tools, AWS, AWS S3, Airflow, Control-M, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, real-time and batch integration, API integration, AI/ML model development, Agile methodologies, CI/CD, Jenkins, Git, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView
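The CDC (Change Data Capture) handling listed above amounts to merging a keyed change feed into a base table. A Spark-independent sketch of that upsert logic, with plain dicts standing in for DataFrames (the record format and op codes are hypothetical):

```python
def apply_cdc(base: dict, changes: list) -> dict:
    """Apply a CDC feed of insert/update/delete records to a keyed base table."""
    merged = dict(base)
    for change in changes:
        key, op = change["id"], change["op"]
        if op == "D":
            merged.pop(key, None)          # delete: drop the row if present
        else:
            row = {k: v for k, v in change.items() if k != "op"}
            merged[key] = row              # insert/update: upsert the row
    return merged

base = {"1": {"id": "1", "amt": 10}, "2": {"id": "2", "amt": 20}}
feed = [
    {"op": "U", "id": "1", "amt": 15},     # update row 1
    {"op": "D", "id": "2"},                # delete row 2
    {"op": "I", "id": "3", "amt": 30},     # insert row 3
]
result = apply_cdc(base, feed)
print(sorted(result))  # ['1', '3']
```

In PySpark the same merge is usually expressed as a join between the base and change DataFrames (or a `MERGE INTO` on platforms that support it), but the row-level semantics are exactly these.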
Posted 6 days ago
6.0 - 11.0 years
8 - 12 Lacs
Chennai
Hybrid
Work Mode: Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, ETL tools, AWS, AWS S3, Airflow, Control-M, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, real-time and batch integration, API integration, AI/ML model development, Agile methodologies, CI/CD, Jenkins, Git, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView
Posted 6 days ago
8.0 - 14.0 years
42 - 47 Lacs
Mumbai
Work from Office
Overview
The MSCI Data Collection team collects 2.5K raw data points across 70K companies from 300 public sources, representing 1.7M+ news articles and 1M documents per year (filings, AGMs, etc.), leading to 20M data updates per year. The team provides publicly disclosed raw input data used in different product models and indexes. We focus on applying QA to input data extracted from filings, news, websites, and NGOs, or received from various data providers. The MSCI Data Collection team comprises a 250+ internal team (review/validation) and a 600+ external team (vendors) to support collection. On a day-to-day basis, the team manages data production and coordinates with vendors, product teams, clients, and corporates. The team is responsible for quality review and ensures that collected data is up to date and adheres to the data collection guidance and methodology defined by MSCI. Moreover, with the team's consistent drive to innovate and leverage technology, the Data Acquisition and Collection team initiates and/or collaborates with other teams on programs related to data quality and process improvements, including but not limited to automation, workflow streamlining projects, and building data QA models.

Responsibilities
As a member of the Data Collection team, you are expected to have a strong interest in general Environment, Social, Governance, Climate, and EU Regulation trends and an eye for detail on data quality. As a team leader, you will be responsible for overseeing a team of analysts based across locations, overseeing production and overall data quality, and managing vendors and stakeholders. To help you succeed in your role, you will have access to different learning and development opportunities, such as leadership, stakeholder management, content, and other functional or technical trainings.

Your specific responsibilities shall include:
- Help nurture a team of young and experienced professionals, supporting them in their professional growth and development while embracing our values of diversity and inclusion.
- Manage team production and capacity planning; identify resource availability for production and other product support processes; create production and resourcing proposals for new projects and initiatives.
- Lead the team in delivering top-quality data aligned with MSCI methodology, service level agreements, and regulatory requirements.
- Drive process improvements to ensure consistent data quality and efficiency, such as automation of data quality diagnostics by developing a new system/tool that enables quality assessment of data without manual intervention.
- Lead initiatives on scaling up operations, process changes, business priorities, and client expectations.
- Collaborate on working committees and projects, or perform other tasks as deemed necessary by the business.
- Work with vendor teams on quality (timeliness and accuracy) expectations and metrics, including overseeing the vendor budget allocation for your product.
- Facilitate steering calls and regular stakeholder and senior leader meetings to report on key performance metrics and initiatives.
- Escalate cases and challenges as necessary.
- Respond to client and issuer queries and be at the forefront representing the team in corporate and client engagements.

Qualifications
15+ years of experience in data collection, research, and team management; ESG or regulatory-related experience would be an added advantage but is not a must.
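The "automation of data quality diagnostics" mentioned in the responsibilities could be illustrated by a small rule-based pandas check; the column names and rules here are hypothetical, chosen only to show the pattern:

```python
import pandas as pd

# Hypothetical collected data points for a few companies.
df = pd.DataFrame({
    "company_id": ["C1", "C2", "C2", "C3"],
    "metric":     ["emissions", "emissions", "emissions", "emissions"],
    "value":      [120.0, None, 95.0, -5.0],
})

# Rule-based QA: count missing values, duplicate (company, metric) keys,
# and physically implausible negative values.
issues = {
    "missing":    int(df["value"].isna().sum()),
    "duplicates": int(df.duplicated(["company_id", "metric"]).sum()),
    "negative":   int((df["value"] < 0).sum()),
}
print(issues)  # {'missing': 1, 'duplicates': 1, 'negative': 1}
```

A production QA tool would add many more rules (range checks per metric, year-over-year deltas, source cross-checks), but each reduces to the same flag-and-count shape before human review.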
Must have:
- Good command of Excel tools and functionality when dealing with large volumes of data.
- Strong communication skills (written and oral), proficiency in creating presentations and data analysis, and excellent research and analytical skills.
- Experience in issuer/corporate relations and client communication.
- Experience working on a global team in an international company is preferred.
- Comfortable working in a team environment across hierarchies, functions, and geographies.
- Strong interpersonal skills and ability to work with people in different offices and time zones.
- Sound knowledge of equities or financial markets in general, and knowledge of company disclosures, filings, and public company reporting, including 10-Ks and annual reports.

Good to have:
- Exposure to pandas or tools like Power BI and Jupyter notebooks.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. 
Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams
We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
Posted 6 days ago
6.0 - 8.0 years
8 - 12 Lacs
Gurugram
Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, ETL tools, AWS, AWS S3, Airflow, Control-M, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, real-time and batch integration, API integration, AI/ML model development, Agile methodologies, CI/CD, Jenkins, Git, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView
Posted 1 week ago
6.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Hybrid
Interview Mode: Virtual (2 Rounds) Type: Contract-to-Hire (C2H) Job Summary We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark , Python , and working with modern data engineering tools in cloud environments such as AWS . Key Skills & Responsibilities Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting, automation, and building reusable components. Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows. Familiarity with AWS ecosystem, especially S3 and related file system operations. Strong understanding of Unix/Linux environments and Shell scripting. Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks. Ability to handle CDC (Change Data Capture) operations on large datasets. Experience in performance tuning, optimizing Spark jobs, and troubleshooting. Strong knowledge of data modeling, data validation, and writing unit test cases. Exposure to real-time and batch integration with downstream/upstream systems. Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging. Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git). Preferred Skills Experience in building or integrating APIs for data provisioning. Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView. 
Familiarity with AI/ML model development using PySpark in cloud environments. Skills: PySpark, Apache Spark, Python, SQL, ETL tools, ETL pipelines, Airflow, Control-M, AWS, AWS S3, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, API integration, batch integration, real-time integration, AI/ML model development, Agile methodologies, CI/CD, Jenkins, Git, Jupyter Notebook, Zeppelin, PyCharm, Tableau, QlikView, Informatica, Jasper
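The CDC (Change Data Capture) handling listed above can be illustrated without a Spark cluster. The sketch below applies an ordered stream of insert/update/delete change events to a keyed snapshot in plain Python; in PySpark the same logic would typically be a join plus a window/dedup over the change log. All field names here are hypothetical.

```python
# Minimal CDC merge sketch: apply change records (insert/update/delete)
# to a keyed snapshot. The logic mirrors what a Spark merge/upsert job
# does, shown in plain Python for clarity. Field names are hypothetical.

def apply_cdc(snapshot, changes):
    """Return a new snapshot dict after applying ordered CDC events."""
    result = dict(snapshot)  # copy; do not mutate the input snapshot
    for event in changes:    # events are assumed ordered by commit time
        key, op = event["id"], event["op"]
        if op in ("insert", "update"):
            result[key] = event["data"]
        elif op == "delete":
            result.pop(key, None)
    return result

snapshot = {1: {"name": "alice"}, 2: {"name": "bob"}}
changes = [
    {"id": 2, "op": "update", "data": {"name": "bobby"}},
    {"id": 3, "op": "insert", "data": {"name": "carol"}},
    {"id": 1, "op": "delete"},
]
print(apply_cdc(snapshot, changes))
# {2: {'name': 'bobby'}, 3: {'name': 'carol'}}
```

In a production pipeline the same merge would run over partitioned data with late-arriving events deduplicated first; the ordering assumption above is the part Spark windowing has to enforce.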
Posted 1 week ago
15.0 - 25.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: SAP for Utilities Billing. Good to have skills: NA.
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Key Responsibilities
a. Design, configure and build applications to meet business process and application requirements.
b. Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation.
c. Billing execution (individual and batch) for daily reporting to managers; risk identification in your module.
d. Analyze issues and work on bug fixes.
Technical Experience
a. Should have hands-on knowledge of implementing Billing-related enhancements and FQ events.
b. Should have knowledge of standard modules used in RICEFW development for Billing objects.
c. Should have good knowledge of all Billing and Invoicing processes, such as the Meter-to-Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advanced billing functions like Real-Time Pricing and Budget Billing.
d. Should have sound knowledge of Billing master data and its integration points with Device Management and FICA.
e. Should have strong debugging skills, including PWB.
Additional Info
a. Good communication skills.
b. Good interpersonal skills.
c. A minimum of 15 years of full-time education is required.
Qualification: 15 years full time education.
Posted 1 week ago
2.0 - 7.0 years
0 Lacs
Gurugram
Work from Office
About the Role
As a Data Science Engineer, you will need strong technical skills in data modeling, machine learning, data engineering, and software development. You will have the ability to conduct literature reviews and critically evaluate research papers to identify applicable techniques. Additionally, you should be able to design and implement efficient and scalable data processing pipelines, perform exploratory data analysis, and collaborate with other teams to integrate data science models into production systems. Passion for conversational AI and a desire to solve some of the most complex problems in the Natural Language Processing space are essential. You will work on highly scalable, stable, and automated deployments, aiming for high performance. Taking on the challenge of building and scaling a truly remarkable AI platform to impact the lives of millions of customers will be part of your responsibilities. Working in a challenging yet enjoyable environment where learning new things is the norm, you should think of solutions beyond boundaries. You should also drive outcomes with full ownership, deeply believe in customer obsession, and thrive in a fast-paced environment of learning and innovation. You will work in a challenging, consumer-facing problem space where you can make an immediate impact. You will get to work with the latest technologies, learn to use new tools, and get the opportunity to have your say in the final product. You'll work alongside a great team in an open, collaborative environment. We are part of Vimo, a well-funded, stable mid-size company with excellent salaries, medical/dental/vision coverage, and perks. Vimo is an Equal Opportunity Employer.
Data Science Engineer Responsibilities:
• Build and maintain robust data pipelines to process data from varied sources like databases, APIs, and file systems.
• Harness data science tools and techniques to develop proofs of concept, evolving solutions through adept prompt engineering and fine-tuning of models like GPT.
• Conduct comprehensive literature reviews and critically evaluate research papers to identify innovative techniques, focusing on the latest advancements in LLM/Generative AI.
• Create and manage JSON APIs to expose data, machine learning services, and AI models to other systems and applications.
• Ensure data accuracy, completeness, and reliability through stringent quality control measures and data validation techniques.
• Optimize existing language models for generative AI tasks, focusing on enhancing their application across various platforms.
• Work in tandem with product teams to seamlessly integrate cutting-edge AI technologies, upholding the highest quality standards in product execution.
• Fine-tune and deploy LLMs, ensuring they are meticulously adjusted and ready for release.
• Engage in ongoing research and application of new methodologies to bolster the efficiency and output quality of our LLM operations.
• Design and develop LLMs dedicated to a range of content generation tasks, pushing the boundaries of AI's creative capabilities.
• Keep up to date with the latest trends and breakthroughs in NLP and large language model technology, incorporating novel approaches to refine our models.
• Lead experiments and analyses to fine-tune model designs and hyperparameters, ensuring superior model performance with continuous monitoring using KPIs and metrics.
• Demonstrate strong analytical and troubleshooting skills, and enjoy owning and solving problems end-to-end.
• Excellent communication skills; comfortable interacting with remote teams in multiple offices that practice agile methodologies.
Requirements & Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
• Minimum of 2 years of experience in NLP-oriented data engineering or data science roles, with a significant emphasis on working with LLM/Generative AI models.
• Robust understanding of NLP concepts, with hands-on experience in conversational AI and expertise in natural language understanding and generation.
• Proficiency in Python programming and familiarity with libraries and frameworks such as Pandas, NumPy, Scikit-learn, TensorFlow, transformers, PyTorch, and Keras.
• Solid experience in building, maintaining, and fine-tuning large language models, with a keen understanding of prompt engineering techniques.
• Strong grasp of data architecture, database design, data modeling principles, and the integration of AI models into scalable systems.
• Experience with JSON APIs and building RESTful & gRPC web services.
• Excellent analytical and problem-solving skills, capable of working independently and collaboratively in a team-oriented environment.
• Proven expertise in working with deep learning frameworks and LLMs, with a strong foundation in prompt engineering, tokenization, embeddings, model optimization, and deployment strategies.
• Previous involvement in creating user-centric products leveraging ML/AI technologies, with a good understanding of predictive modeling, meta-learning, and transfer learning.
• Excellent problem-solving and communication skills.
• BS degree in Information Technology, Computer Science, or a relevant field.
Additional Experience We Would Love to Have:
• Experience with cloud technologies such as AWS is a plus.
• Background in design and development of technology for Government Health and Human Services.
• Experience with design and development of SaaS solutions.
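The data accuracy, completeness, and validation responsibilities described above can be sketched with the standard library alone. The schema and rules below are illustrative, not taken from any real pipeline:

```python
# Hedged sketch of a validation stage in a data pipeline: each raw
# record is checked for required fields and basic type/content rules
# before being passed downstream. Schema and rules are illustrative.

REQUIRED = {"user_id": int, "text": str}

def validate(record):
    """Return (True, record) if the record passes, else (False, reason)."""
    for field, ftype in REQUIRED.items():
        if field not in record:
            return False, f"missing field: {field}"
        if not isinstance(record[field], ftype):
            return False, f"bad type for {field}"
    if not record["text"].strip():
        return False, "empty text"
    return True, record

def run_pipeline(records):
    """Split records into clean rows and rejects with reasons."""
    clean, rejects = [], []
    for rec in records:
        ok, payload = validate(rec)
        (clean if ok else rejects).append(payload)
    return clean, rejects

raw = [
    {"user_id": 1, "text": "hello"},
    {"user_id": "2", "text": "oops"},   # wrong type
    {"user_id": 3, "text": "   "},      # empty after strip
]
clean, rejects = run_pipeline(raw)
print(len(clean), rejects)  # 1 ['bad type for user_id', 'empty text']
```

Keeping rejects alongside their reasons, rather than silently dropping them, is what makes downstream quality monitoring possible.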
Posted 1 week ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Responsibilities: * Collaborate with cross-functional teams on project requirements and deliverables. * Develop scalable Python apps using Django, Pandas, AWS, REST APIs, SQL.
Posted 1 week ago
5.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
What you’ll do:
• Utilize advanced mathematical, statistical, and analytical expertise to research, collect, analyze, and interpret large datasets from internal and external sources to provide insight and develop data-driven solutions across the company.
• Build and test predictive models, including but not limited to credit risk, fraud, response, and offer acceptance propensity.
• Develop, test, validate, track, and enhance the performance of statistical models and other BI reporting tools, leading to innovative origination strategies within marketing, sales, finance, and underwriting.
• Leverage advanced analytics to develop innovative portfolio surveillance solutions to track and forecast loan losses that influence key business decisions related to pricing optimization, credit policy, and overall profitability strategy.
• Use decision science methodologies and advanced data visualization techniques to implement creative automation solutions within the organization.
• Initiate and lead analysis to bring actionable insights to all areas of the business, including marketing, sales, collections, and credit decisioning.
• Develop and refine unit economics models to enable marketing and credit decisions.
What you’ll need:
• 5 to 8 years of experience in data science or a related role with a focus on Python programming and ML models.
• Strong Python skills, with experience in Jupyter notebooks and packages like polars, pandas, numpy, scikit-learn, matplotlib, etc.
• Experience with the ML lifecycle: data preparation, training, evaluation, and deployment.
• Hands-on experience with GCP services for ML & data science.
• Experience with Vector Search and Hybrid Search techniques.
• Embeddings generation using BERT, Sentence Transformers, or custom models.
• Embedding indexing and retrieval (Elastic, FAISS, ScaNN, Annoy).
• Experience with LLMs and use cases like RAG.
• Understanding of semantic vs lexical search paradigms.
• Experience with Learning to Rank (LTR) and libraries like XGBoost and LightGBM with LTR support.
• Proficiency in SQL and BigQuery.
• Experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark.
• Model deployment using Vertex AI, Cloud Run, or Cloud Functions.
• Familiarity with BM25 ranking (Elasticsearch or OpenSearch) and vector blending.
• Awareness of search relevance evaluation metrics (precision@k, recall, nDCG, MRR).
Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us:
• Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
• Abundant opportunities for engagement with customers, product managers, and leadership.
• You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions.
• Cultivate and leverage robust connections within diverse communities of interest.
• Choose your mentor to navigate your current endeavors and steer your future trajectory.
• Embrace continuous learning and upskilling opportunities through Nexversity.
• Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
• Embrace a hybrid work model promoting work-life balance.
• Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
• Embark on accelerated career paths to actualize your professional aspirations.
Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
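The search-relevance evaluation metrics named in the requirements above (precision@k, MRR, nDCG) are straightforward to compute. A plain-Python sketch over a single ranked result list with binary relevance labels:

```python
import math

# Search-relevance metrics over one ranked result list.
# rels[i] is 1 if the item at rank i+1 is relevant, else 0.

def precision_at_k(rels, k):
    """Fraction of the top-k results that are relevant."""
    return sum(rels[:k]) / k

def mrr(rels):
    """Reciprocal rank of the first relevant result (0 if none)."""
    for rank, r in enumerate(rels, start=1):
        if r:
            return 1.0 / rank
    return 0.0

def ndcg(rels, k):
    """Binary nDCG@k: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(r / math.log2(i + 1) for i, r in enumerate(rels[:k], start=1))
    ideal = sorted(rels, reverse=True)
    idcg = sum(r / math.log2(i + 1) for i, r in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0

rels = [0, 1, 1, 0, 1]          # relevant at ranks 2, 3, and 5
print(precision_at_k(rels, 3))  # 2/3 ≈ 0.667
print(mrr(rels))                # 0.5
```

In practice these are averaged over a query set (giving MAP-style aggregates), and nDCG is usually computed with graded rather than binary relevance; the formulas are the same.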
Posted 1 week ago
5.0 - 7.0 years
9 - 11 Lacs
Hyderabad
Work from Office
Role: PySpark Developer. Locations: Multiple. Work Mode: Hybrid. Interview Mode: Virtual (2 Rounds). Type: Contract-to-Hire (C2H).
Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.
Key Skills & Responsibilities
• Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
• Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
• Proficiency in Python for scripting, automation, and building reusable components.
• Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
• Familiarity with the AWS ecosystem, especially S3 and related file system operations.
• Strong understanding of Unix/Linux environments and shell scripting.
• Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
• Ability to handle CDC (Change Data Capture) operations on large datasets.
• Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
• Strong knowledge of data modeling, data validation, and writing unit test cases.
• Exposure to real-time and batch integration with downstream/upstream systems.
• Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
• Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).
Preferred Skills
• Experience in building or integrating APIs for data provisioning.
• Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
• Familiarity with AI/ML model development using PySpark in cloud environments.
Posted 1 week ago
4.0 - 6.0 years
6 - 12 Lacs
Gurugram
Work from Office
Responsibilities: * Design, develop, test & maintain Python applications using Django/Flask. * Collaborate with cross-functional teams on project delivery.
Posted 1 week ago
3.0 - 8.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Key Responsibilities:
• Design and develop machine learning models and algorithms to solve business problems
• Write clean, efficient, and reusable Python code for data processing and model deployment
• Collaborate with data engineers and product teams to integrate models into production systems
• Analyze large datasets to derive insights, trends, and patterns
• Evaluate model performance and continuously improve through retraining and tuning
• Create dashboards, reports, and data visualizations as needed
• Maintain documentation and ensure code quality and version control
Preference: Must have hands-on experience in building, training, and deploying AI/ML models using relevant frameworks and tools within a Linux environment.
• Strong proficiency in Python with hands-on experience in data science libraries (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch, etc.)
• Experience working with Hugging Face Transformers, spaCy, ChatGPT (OpenAI APIs), and DeepSeek LLMs for building NLP or generative AI solutions
• Solid understanding of machine learning, statistics, and data modeling
• Experience with data preprocessing, feature engineering, and model evaluation
• Familiarity with SQL and working with structured/unstructured data
• Knowledge of APIs, data pipelines, and cloud platforms (AWS, GCP, or Azure) is a plus
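The build-train-evaluate cycle described above can be shown with a deliberately dependency-free sketch: a hand-rolled nearest-centroid classifier (the same idea scikit-learn ships as NearestCentroid). Data and labels are illustrative.

```python
import math

# Dependency-free train/evaluate sketch: a nearest-centroid classifier.
# train() learns one centroid per class; predict() assigns the label of
# the closest centroid; accuracy() evaluates on held-out points.

def train(X, y):
    """Compute one centroid (per-feature mean) per class label."""
    sums, counts = {}, {}
    for point, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(point))
        for j, v in enumerate(point):
            acc[j] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, point):
    """Assign the label of the closest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(point, centroids[lbl]))

def accuracy(centroids, X, y):
    hits = sum(predict(centroids, p) == t for p, t in zip(X, y))
    return hits / len(y)

X_train = [(0, 0), (1, 0), (9, 9), (10, 10)]
y_train = ["low", "low", "high", "high"]
model = train(X_train, y_train)
print(model["low"], model["high"])                         # [0.5, 0.0] [9.5, 9.5]
print(accuracy(model, [(0, 1), (8, 9)], ["low", "high"]))  # 1.0
```

The same three-step shape (fit on training data, predict on new points, score on a held-out set) carries over directly to the scikit-learn and PyTorch workflows the posting lists.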
Posted 2 weeks ago
15.0 - 25.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: SAP for Utilities Billing. Good to have skills: NA.
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Key Responsibilities
a. Design, configure and build applications to meet business process and application requirements.
b. Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation.
c. Billing execution (individual and batch) for daily reporting to managers; risk identification in your module.
d. Analyze issues and work on bug fixes.
Technical Experience
a. Should have hands-on knowledge of implementing Billing-related enhancements and FQ events.
b. Should have knowledge of standard modules used in RICEFW development for Billing objects.
c. Should have good knowledge of all Billing and Invoicing processes, such as the Meter-to-Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advanced billing functions like Real-Time Pricing and Budget Billing.
d. Should have sound knowledge of Billing master data and its integration points with Device Management and FICA.
e. Should have strong debugging skills, including PWB.
Additional Info
a. Good communication skills.
b. Good interpersonal skills.
c. A minimum of 15 years of full-time education is required.
Qualification: 15 years full time education.
Posted 2 weeks ago
15.0 - 25.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: SAP for Utilities Billing. Good to have skills: NA.
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Key Responsibilities
a. Design, configure and build applications to meet business process and application requirements.
b. Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation.
c. Billing execution (individual and batch) for daily reporting to managers; risk identification in your module.
d. Analyze issues and work on bug fixes.
Technical Experience
b. Should have knowledge of standard modules used in RICEFW development for Billing objects.
c. Should have good knowledge of all Billing and Invoicing processes, such as the Meter-to-Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advanced billing functions like Real-Time Pricing and Budget Billing.
d. Should have sound knowledge of Billing master data and its integration points with Device Management and FICA.
e. Should have strong debugging skills, including PWB.
Additional Info
a. Good communication skills.
b. Good interpersonal skills.
c. A minimum of 15 years of full-time education is required.
Qualification: 15 years full time education.
Posted 2 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: SAP for Utilities Billing. Good to have skills: NA.
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Key Responsibilities
a. Design, configure and build applications to meet business process and application requirements.
b. Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation.
c. Billing execution (individual and batch) for daily reporting to managers.
d. Risk identification in your module.
e. Analyze issues and work on bug fixes.
Technical Experience
a. Should have hands-on knowledge of implementing Billing-related enhancements and FQ events.
b. Should have knowledge of standard modules used in RICEFW development for Billing objects.
c. Should have good knowledge of all Billing and Invoicing processes, such as the Meter-to-Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advanced billing functions like Real-Time Pricing and Budget Billing.
d. Should have an understanding of Billing master data and its integration points with Device Management and FICA.
e. Should have strong debugging skills, including PWB.
Professional Attributes
1. Good communication skills.
2. Flexible to work.
3. Ability to work under pressure.
4. Good analytical and presentation skills.
5. Decision-making ability.
Qualification: 15 years full time education.
Posted 2 weeks ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary
As a Performance Analysis engineer, you will work as part of a team responsible for modeling, measurement, and analysis of storage systems performance. The overall focus of the Research and Development function, of which this role is a part, is on competitive market and customer requirements, technology advances, product quality, product cost and time to market. Performance engineers focus on performance analysis and improvement for new products and features as well as enhancements to existing products and features. This position requires an individual to be broad-thinking and systems-focused, creative, team-oriented, technologically savvy, and driven to produce results.
Job Requirements
• Knowledge of performance analysis and modeling techniques, tools and benchmarking.
• Extensive knowledge and experience in computer operating systems, hardware architecture and design, data structures and standard programming practices; systems programming in C is highly desirable.
• Strong scripting skills in Perl and Python, primarily with Jupyter Notebooks and shell.
• Exceptional presentation and interpersonal skills.
• Strong influencing and leadership skills.
• The ability to make accurate work estimates and develop predictable plans.
• Knowledge of storage and file systems.
• Understanding of AI/ML workloads.
• Understanding of performance tradeoffs when designing on-prem and cloud systems.
• The ability and willingness to adapt to rapidly changing work environments, and enhance automation frameworks (generally Python-based) to improve productivity.
Responsibilities
• Measure and analyze product performance to identify performance improvement opportunities.
• Design, implement, execute, analyze, interpret, socialize and apply storage-oriented performance workloads and their results, including the creation of tools and automation as necessary.
• Work closely with development teams to drive the performance improvement agenda.
• Evaluate design alternatives for enhanced performance and prototype opportunities for performance enhancements.
• Create analytical and simulation-based models to predict storage systems performance.
• Successfully convey information to stakeholders at many levels related to the position.
• Participate as a proactive contributor and subject matter expert on team projects.
Education
• 4 to 7 years of experience is preferred.
• A Bachelor of Science, Master of Science, or PhD degree in Electrical Engineering or Computer Science, or equivalent experience, is required.
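Repeated-trial measurement, rather than one-shot timing, is the starting point for the kind of performance analysis described above. A minimal standard-library harness is sketched below; the workload is a stand-in, not a storage benchmark, and the trial counts are illustrative.

```python
import statistics
import timeit

# Minimal measurement-harness sketch: time a workload over repeated
# trials and report median and spread rather than a single run, since
# one-shot timings are noisy. Workload and trial counts are stand-ins.

def measure(workload, trials=5, number=100):
    """Return per-call times (seconds): one sample per trial."""
    timer = timeit.Timer(workload)
    return [t / number for t in timer.repeat(repeat=trials, number=number)]

def summarize(samples):
    return {
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }

workload = lambda: sorted(range(1000), key=lambda x: -x)  # stand-in op
stats = summarize(measure(workload))
print(f"median={stats['median']:.2e}s stdev={stats['stdev']:.2e}s")
```

Reporting the median with a spread (and, for real benchmarks, warm-up runs and percentiles) is the usual defense against scheduler noise and caching effects skewing a single measurement.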
Posted 2 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Posting Title: BUSINESS INTELLIGENCE ANALYST I
Band/Level: 5-4-S
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: Less than 1 year
At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.
Job Overview
TE Connectivity's Business Intelligence Teams are responsible for the processing, mining and delivery of data to their customer community through repositories, tools and services.
Roles & Responsibilities
• Assist in the development and deployment of Digital Factory solutions and Machine Learning models across Manufacturing, Quality, and Supply Chain functions.
• Support data collection, cleaning, preparation, and transformation from multiple sources, ensuring data consistency and readiness.
• Contribute to the creation of dashboards and reports using tools such as Power BI or Tableau.
• Work on basic analytics and visualization tasks to derive insights and identify improvement areas.
• Assist in maintaining existing ML models, including data monitoring and model retraining processes.
• Participate in small-scale PoCs (proofs of concept) and pilot projects with senior team members.
• Document use cases, write clean code with guidance, and contribute to knowledge-sharing sessions.
• Support integration of models into production environments and perform basic testing.
Desired Candidate
• Proficiency in Python and/or R for data analysis, along with libraries like Pandas, NumPy, Matplotlib, Seaborn.
• Basic understanding of statistical concepts such as distributions, correlation, regression, and hypothesis testing.
• Familiarity with SQL or other database querying tools, e.g., pyodbc, sqlite3, PostgreSQL.
• Exposure to ML algorithms like linear/logistic regression, decision trees, k-NN, or SVM.
• Basic knowledge of Jupyter Notebooks and version control using Git/GitHub.
• Good communication skills in English (written and verbal); able to explain technical topics simply.
• Collaborative, eager to learn, and adaptable in a fast-paced and multicultural environment.
• Exposure to or interest in manufacturing technologies (e.g., stamping, molding, assembly).
• Exposure to cloud platforms (AWS/Azure) or services like S3, SageMaker, Redshift is an advantage.
• Hands-on experience in image data preprocessing (resizing, Gaussian blur, PCA) or computer vision projects.
• Interest in AutoML tools and transfer learning techniques.
ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).
WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
• Competitive Salary Package
• Performance-Based Bonus Plans
• Health and Wellness Incentives
• Employee Stock Purchase Program
• Community Outreach Programs / Charity Events
IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process.
All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities. Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
Posted 2 weeks ago
2.0 - 7.0 years
0 - 1 Lacs
Raipur
Work from Office
Job Title: Python Developer (Django & AI/ML)
Location: Raipur, Chhattisgarh
Job Type: Full-Time | On-site
Job Overview: We are hiring a skilled Python Developer with expertise in Django and AI/ML technologies to join our growing team in Raipur. The ideal candidate will be responsible for developing robust web applications, designing APIs, and implementing intelligent machine learning solutions.
Key Responsibilities:
• Develop scalable web applications using Python and Django
• Design and build RESTful APIs and integrate with frontend frameworks
• Work with relational databases (PostgreSQL, MySQL) and ORM tools
• Implement asynchronous task processing with Celery
• Develop, train, and deploy machine learning models
• Handle NLP, computer vision, and AI-driven tasks
• Visualize data and insights using tools like Matplotlib and Seaborn
• Collaborate with DevOps for deployment and monitoring
Core Requirements:
• Strong proficiency in Python and object-oriented programming
• Experience with Django and Django REST Framework
• Solid understanding of web development principles and MVC architecture
• Hands-on experience with relational databases and ORM tools
Technical Skills:
• Frontend technologies: HTML5, CSS3, JavaScript, Bootstrap
• Knowledge of React, Vue.js, or Angular
• API development using REST or GraphQL
• Version control using Git (GitHub/GitLab)
• Testing with pytest, unittest, or the Django test framework
• Asynchronous processing with Celery
AI/ML Skills:
• Machine Learning: scikit-learn, pandas, NumPy
• Deep Learning: TensorFlow, PyTorch, Keras
• Data analysis and visualization: pandas, matplotlib, seaborn
• Experience in model development, training, and deployment
• Exposure to NLP, Computer Vision, and MLOps
• Familiarity with Jupyter Notebooks
Preferred / Additional Skills:
• Cloud experience (AWS, Azure, Google Cloud Platform)
• Docker and Kubernetes for containerization
• Big Data technologies (Apache Spark, Hadoop)
• NoSQL databases (MongoDB, Redis, Elasticsearch)
• CI/CD pipelines and DevOps practices
• Experience with FastAPI and microservices architecture
Why Join Us:
• Work on innovative AI/ML and intelligent software solutions
• Collaborative, innovation-driven work culture
• Exposure to the latest tools and technologies
• Structured career growth and learning opportunities
• Competitive salary and benefits package
How to Apply: Interested candidates may send their updated resume to career@srfcnbfc.in
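The asynchronous task processing mentioned above (Celery) follows a producer/worker pattern that can be sketched in-process with the standard library; Celery adds a real broker, worker processes, and a result backend on top of this idea. Task names below are hypothetical.

```python
import queue
import threading

# Stdlib-only sketch of the producer/worker pattern that Celery
# provides at scale (broker + worker processes + result backend).
# Here: an in-process queue, one worker thread, and a results dict.

tasks = queue.Queue()
results = {}

def worker():
    while True:
        task_id, func, args = tasks.get()
        if func is None:          # sentinel: shut the worker down
            tasks.task_done()
            break
        results[task_id] = func(*args)
        tasks.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Enqueue tasks the way Celery's .delay() would hand them to a broker.
tasks.put(("t1", lambda x: x * 2, (21,)))
tasks.put(("t2", sum, ([1, 2, 3],)))
tasks.put((None, None, None))     # stop signal
tasks.join()                      # block until everything is processed
print(results)                    # {'t1': 42, 't2': 6}
```

The key property, shared with Celery, is that the producer never blocks on the work itself, only on queue capacity; retries, persistence, and cross-process distribution are what the real broker adds.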
Posted 2 weeks ago