7.0 - 12.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Your future role
Take on a new challenge and apply your data engineering expertise in a cutting-edge field. You'll work alongside collaborative and innovative teammates, and you'll play a key role in enabling data-driven decision-making across the organization by ensuring data availability, quality, and accessibility. Day-to-day, you'll work closely with teams across the business (e.g., Data Scientists, Analysts, and ML Engineers), mentor junior engineers, and contribute to the architecture and design of our data platforms and solutions. You'll specifically take care of designing and developing scalable data pipelines, as well as managing and optimizing object storage systems.

We'll look to you for:
- Designing, developing, and maintaining scalable and efficient data pipelines using tools like Apache NiFi and Apache Airflow.
- Creating robust Python scripts for data ingestion, transformation, and validation.
- Managing and optimizing object storage systems such as Amazon S3, Azure Blob Storage, or Google Cloud Storage.
- Collaborating with Data Scientists and Analysts to understand data requirements and deliver production-ready datasets.
- Implementing data quality checks, monitoring, and alerting mechanisms.
- Ensuring data security, governance, and compliance with industry standards.
- Mentoring junior engineers and promoting best practices in data engineering.

All about you
We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering or a similar role.
- Strong proficiency in Python and data processing libraries (e.g., Pandas, PySpark).
- Hands-on experience with Apache NiFi for data flow automation.
- Deep understanding of object storage systems and cloud data architectures.
- Proficiency in SQL and experience with both relational and NoSQL databases.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Exposure to the Data Science ecosystem, including tools like Jupyter, scikit-learn, TensorFlow, or MLflow.
- Experience working in cross-functional teams with Data Scientists and ML Engineers.
- Cloud certifications or relevant technical certifications are a plus.
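As an illustration of the data-quality checks and Python validation scripts this role describes, here is a minimal sketch using pandas; the column names, thresholds, and batch shape are hypothetical, not part of the posting:

```python
import pandas as pd

# Hypothetical schema for an ingested batch; a real pipeline would load
# this from object storage (e.g., S3) before promoting it downstream.
REQUIRED_COLUMNS = {"order_id", "amount", "created_at"}

def validate_batch(df: pd.DataFrame) -> list:
    """Return a list of data-quality violations (empty list = batch passes)."""
    issues = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues  # a schema failure blocks further checks
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    if df["amount"].isna().mean() > 0.01:  # tolerate < 1% nulls
        issues.append("amount null rate above 1%")
    if (df["amount"] < 0).any():
        issues.append("negative amounts present")
    return issues

if __name__ == "__main__":
    batch = pd.DataFrame({
        "order_id": [1, 2, 2],
        "amount": [10.0, -5.0, 3.0],
        "created_at": pd.to_datetime(["2024-01-01"] * 3),
    })
    print(validate_batch(batch))
```

In practice a check like this would run as a task in an orchestrator such as Airflow, with failures routed to the alerting mechanisms the posting mentions.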
Posted 4 hours ago
5.0 - 9.0 years
17 - 22 Lacs
Coimbatore
Work from Office
Overview
POSITION OVERVIEW – Senior Associate
Role & Responsibilities
- Build a high-performing team by recruiting, onboarding, coaching, and continuously developing talent.
- Manage staff and functional activities to achieve team goals aligned with organizational objectives.
- Ensure productivity, data quality, and SLA compliance through rigorous performance oversight.
- Evaluate and implement industry-leading QA practices on both data and content; re-engineer quality assurance and quality control processes, including effectively tracking and reporting quality improvements.
- Undertake projects to improve the data workflow, QA processes, and vendor data onboarding, and ensure these are executed in line with the transformation program.
- Evaluate re-engineering opportunities by analyzing existing processes, systematically identifying manual/off-system tasks and process bottlenecks, and creating process maps as specifications for BA/IT, then drive them.
- Lead the data transformation, working with the Data Operations teams in Manila/Mumbai and collaborating with the BA/Project Management, Technology, and Data Science teams.
- Apply strong data analysis skills to interpret large datasets, identify anomalies, and report data trends and patterns.
- Participate in and shape discussions on scaling and transforming operations, process changes, business priorities, and client expectations.
- Identify opportunities to leverage ML/NLP for unstructured data extraction and quality enhancement.
- Lead projects involving new data onboarding and ensure all new data sourcing is in line with the data transformation agenda.
- Drive strategic planning and goal setting while fostering upskilling and career growth opportunities for the team.

Responsibilities
People Management: Focus on training a pool of junior team members and on setting up team workflows and processes to detect data quality issues, along with the metrics and KPIs to track them.
Keep consistent track of team members' progress and help the team acquire the right level of training and skills. Develop a training plan and modules to help train new joiners and backfills.
Delivery: Deliver quality-assured data (financial/non-financial) as per agreed daily SLAs. Work with internal stakeholders and downstream teams to meet Service Level Agreements (SLAs) on data quality assurance (accuracy, completeness, and timeliness). Create and implement appropriate metrics on deliverables to stakeholders, and always ensure the agreed SLA is met.
Process Improvement: Create new process metrics, KPIs, and scorecards. Independently manage deliverables and stakeholders' expectations. Lead team quality improvements by analyzing existing processes and focusing on areas for improvement and automation, applying Lean Six Sigma, Balanced Scorecard, and other TQM approaches. Focus on process improvements, including eliminating redundancies, brainstorming process-improvement ideas, and partnering with the engineering teams to improve the current state of work.
- Work schedule is in the India time zone and may extend into EMEA hours for meetings.
- Adaptability and flexibility to work in a fast-paced and changing environment.
- Ability to work within a team-oriented environment across hierarchies, functions, and geographies.
- Ability to communicate with various internal and external parties globally.
- Interest in automation, with supporting skills such as advanced Excel knowledge, dashboard building with Power BI, and automation through Jupyter/Python notebooks.
- High interest in learning new things and willingness to take on and deliver challenging work.
Good to have: Sound knowledge of capital markets and how the markets function (exposure to various financial products such as Equities, Fixed Income, Derivatives, Commodities, Corporate Actions, etc.).
Knowledge of financial statements such as annual reports, the income statement, the balance sheet, and the cash flow statement.
Qualifications
Necessary skills and qualifications:
- 10+ years of experience in Operations/Team Management, Process Optimization, and Project & Change Management.
- 5+ years of people management experience, including leading teams and driving performance.
- Analytical skills and strong attention to detail.
- Keen interest in analyzing data and process flows, with a strong quality focus.
- Exposure to tools such as Python and SQL is an added advantage.
- Demonstrated experience in improving processes/automation through applications of Python/ML/RPA, or a demonstrated interest in learning and scaling them.
- Strong hands-on skills with advanced Excel features; exposure to visualization tools such as Power BI would be an advantage.
- Self-starter and self-motivated; must be solutions-focused and able to work in unstructured environments.
- Comfortable working in a team environment across hierarchies, functions, and geographies.
- Experience working in the Financial/Technology/Business Analysis domain.
- Strong financial services knowledge and experience; exposure to ESG and Private Assets would be an added advantage.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients. A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities.
If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
Posted 9 hours ago
3.0 - 7.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Dear Job Seekers! Greetings from Chimera Technologies! We are looking for a Data Scientist (3-6 yrs experience, immediate joiner, willing to work in a 3-6 month C2H mode). Kindly find the JD and JS below.

Job Description (JD) - Data Scientist
Position Title: Data Scientist
Experience Required: 5-6 Years
Location: Bangalore
Employment Type: C2H, 3-6 Months
Joining: Immediate

Role Overview: We are looking for a highly skilled and motivated Data Scientist with 5-6 years of experience to join our team immediately. The ideal candidate will have strong expertise in Python-based data science frameworks, machine learning, deep learning fundamentals, and cloud-based ML services. You will be responsible for building predictive models, performing data analysis, and delivering actionable insights through advanced analytics and visualization tools.

Key Responsibilities:
- Develop and deploy machine learning models using Python frameworks such as scikit-learn, XGBoost, and SciPy.
- Perform data manipulation and analysis using Pandas, NumPy, and Polars.
- Apply deep learning techniques (ANN, LSTM, etc.) to solve complex problems.
- Conduct model evaluation, tuning, and performance optimization.
- Create compelling visualizations using Matplotlib, Seaborn, Plotly, and BI tools like Tableau or Power BI.
- Work with cloud-based ML services such as AWS SageMaker, GCP BigQuery, or Azure ML.
- Write efficient SQL queries for data extraction and transformation.
- Collaborate with cross-functional teams to integrate data science solutions into business processes.
- (Optional) Work with ETL pipelines and Big Data tools like PySpark and Hadoop.

Required Skills:
- Strong proficiency in Python; knowledge of R is a plus.
- Solid understanding of SQL and data querying.
- Experience with Python libraries: Pandas, NumPy, Polars, scikit-learn, XGBoost, SciPy.
- Knowledge of machine learning algorithms and the model lifecycle.
- Basic understanding of deep learning architectures (ANN, LSTM).
- Visualization expertise using Matplotlib, Seaborn, Plotly, and BI tools (Tableau, Power BI).
- Experience with at least one cloud ML platform: AWS, GCP, or Azure.
- Familiarity with ETL processes and Big Data technologies is a plus.

Job Specification (JS) - Data Scientist
- Education: Bachelor's or Master's in Data Science, Computer Science, Statistics, or a related field
- Experience: 5-6 years in Data Science or related roles
- Technical Skills: Python, SQL, scikit-learn, XGBoost, Pandas, NumPy, Tableau, Power BI
- Cloud Platforms: AWS SageMaker, GCP BigQuery, Azure ML (any one)
- Additional Skills: ETL, PySpark, Hadoop (preferred but not mandatory)
- Soft Skills: Analytical thinking, communication, collaboration, problem-solving
- Availability: Immediate joiner
- Location: Bangalore, hybrid mode

Regards,
HR Team
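As an illustration of the model-building workflow the posting describes, here is a minimal scikit-learn sketch on synthetic data; the dataset, features, and model choice are invented for the example and are not part of the role:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic classification data standing in for a real business dataset.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # made-up target rule

# Hold out a test set, fit a scaled logistic-regression pipeline,
# and evaluate on unseen data -- the develop/evaluate loop in the JD.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"hold-out accuracy: {acc:.2f}")
```

The same pattern extends to XGBoost or a cloud ML service by swapping the estimator while keeping the split/fit/evaluate structure.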
Posted 1 day ago
2.0 - 4.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Your Impact: As a Python Developer in the Debricked data science team, you will work on enhancing data intake processes and optimizing data pipelines. You will apply many different approaches, depending on the needs of the product and the challenges you encounter. In some cases we use AI/LLM techniques, and we expect the number of such cases to increase. Your contributions will directly impact Debricked's scope and quality and will help ensure future commercial growth of the product.

What the role offers: As a Python Developer, you will benefit from:
- Innovative data solutions: Develop and optimize data pipelines that improve the efficiency, accuracy, and automation of the Debricked SCA tool's data intake processes.
- A collaborative environment: Work closely with engineers and product managers from Sweden and India to create impactful, data-driven solutions.
- Continuous improvement: Play an essential role in maintaining and improving the data quality that powers Debricked's analysis, improving the product's competitiveness.
- Skill development: Collaborate across teams and leverage OpenText's resources (including an educational budget) to develop your expertise in software engineering, data science, and AI, expanding your skill set in both traditional and cutting-edge technologies.

What you need to succeed:
- 2-4 years of experience in Python development, with a focus on optimizing data processes and improving data quality.
- Proficiency in Python and related tools and libraries such as Jupyter, Pandas, and NumPy.
- A degree in Computer Science or a related discipline.
- An interest in application security.
- Skills in Go, Java, LLMs (specifically Gemini), GCP, Kubernetes, MySQL, Elastic, and Neo4j are an asset.
- A strong understanding of how to manage and improve data quality in automated systems and pipelines.
- Ability to address complex data challenges and develop solutions to optimize systems.
- Comfortable working in a distributed team, collaborating across different time zones.
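As an illustration of the data-intake cleanup this role describes, here is a hypothetical sketch that normalizes dependency records into a single shape and de-duplicates them; the record fields are invented for the example and do not reflect Debricked's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dependency:
    """One normalized software dependency (frozen so it is hashable)."""
    ecosystem: str
    name: str
    version: str

def normalise(raw: dict) -> Dependency:
    """Lower-case and trim fields so equivalent records compare equal."""
    return Dependency(
        ecosystem=raw["ecosystem"].strip().lower(),
        name=raw["name"].strip().lower(),
        version=raw["version"].strip(),
    )

def dedupe(records: list) -> list:
    """Normalize each raw record and keep only the first of each duplicate."""
    seen, out = set(), []
    for raw in records:
        dep = normalise(raw)
        if dep not in seen:
            seen.add(dep)
            out.append(dep)
    return out

# Two spellings of the same dependency collapse to one record.
raw_records = [
    {"ecosystem": "PyPI", "name": "Requests", "version": "2.31.0"},
    {"ecosystem": "pypi", "name": "requests ", "version": "2.31.0"},
]
print(dedupe(raw_records))
```

Normalizing before comparison is what makes automated quality checks on the intake pipeline reliable: without it, equivalent records from different sources look distinct.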
Posted 1 day ago
4.0 - 8.0 years
6 - 10 Lacs
Thane
Work from Office
Role Overview
EssentiallySports is seeking a Growth Product Manager who can scale our web platform's reach, engagement, and impact. This is not a traditional marketing role: your job is to engineer growth through product innovation, user journey optimization, and experimentation. You'll be the bridge between editorial, tech, and analytics, turning insights into actions that drive sustainable audience and revenue growth.

Key Responsibilities
- Own the entire web user journey, from page discovery to conversion to retention.
- Identify product-led growth opportunities using scroll depth, CTRs, bounce rates, and cohort behavior.
- Optimize high-traffic areas of the site (landing pages, article CTAs, newsletter modules) for conversion and time-on-page.
- Set up and scale A/B testing and experimentation pipelines for UI/UX, headlines, engagement surfaces, and signup flows.
- Collaborate with SEO and Performance Marketing teams to translate high-ranking traffic into engaged, loyal users.
- Partner with content and tech teams to develop recommendation engines, personalization strategies, and feedback loops.
- Monitor analytics pipelines, from GA4 to Athena dashboards, to derive insights and drive decision-making.
- Introduce AI-driven features (LLM prompts, content auto-summaries, etc.) that personalize or simplify the user experience.
- Use tools like Jupyter, Google Analytics, Glue, and others to synthesize data into growth opportunities.

Who you are
- 4+ years of experience in product growth, web engagement, or analytics-heavy roles.
- Deep understanding of web traffic behavior, engagement funnels, bounce/exit analysis, and retention loops.
- Hands-on experience running product experiments and growth sprints, and interpreting funnel analytics.
- Strong proficiency in SQL, GA4, marketing analytics, and campaign management.
- Understands customer segmentation, LTV analysis, cohort behavior, and user funnel optimization.
- Thrives in ambiguity and loves building things from scratch.
- Passionate about AI, automation, and building sustainable growth engines.
- Thinks like a founder: drives initiatives independently, hunts for insights, moves fast.
- A team player who collaborates across engineering, growth, and editorial teams.
- Proactive and solution-oriented, always spotting opportunities for real growth.
- Thrives in a fast-moving environment, taking ownership and driving impact.
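As an illustration of evaluating one of the A/B experiments described above, here is a back-of-the-envelope two-proportion z-test in plain Python; the conversion counts are made up for the example:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates
    between variant A (control) and variant B (treatment)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 4.8% vs 5.6% newsletter signup rate.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the lift is significant at the usual 0.05 level; a real experimentation pipeline would also fix the sample size in advance and guard against peeking.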
Posted 1 day ago
8.0 - 13.0 years
9 - 13 Lacs
hyderabad
Work from Office
About The Role Project Role: Software Development Lead. Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity. Must have skills: Machine Learning. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As a Senior Machine Learning Engineer, you will oversee the engineering team providing advanced technical support for production-level AI models, addressing bug fixes, complex issues, feature additions, and model expansion to new business cases. You will own the issue management space and the related processes and workflows. You will support the core data science team with model deployment and machine learning operations (MLOps). You will support two or more business areas. Roles & Responsibilities: - Provide advanced technical support for production-level AI models, addressing bug fixes, complex issues and model expansion to new business cases. - Act as a liaison between data scientists, cloud engineers, data engineers and the business team to diagnose and resolve issues related to AI model performance and deployment. Coordinate issue prioritization. - Lead the continuous improvement of support processes and workflows. - Build model performance benchmarking, evaluation and monitoring capabilities. Develop dashboards and reports to present data quality and model performance KPIs. - Assess model performance to understand bugs, inefficiencies and their root causes. - Document and communicate best practices and troubleshooting procedures to Level 1 and Level 2 support teams. - Coordinate and mentor the Machine Learning Engineering team, promoting best practices and code quality. - Support data scientists with data exploration and cleaning, data analytics, data visualizations and data annotations. Develop custom data analytics tools and data visualizations. - Stay current with the latest advancements in AI and machine learning technologies to provide informed support and recommendations. - 8+ years of experience in deploying, monitoring and maintaining machine learning models in cloud environments. - 5-6 years of experience in advanced data analysis on large data sets. - Strong experience working in a cross-functional environment supporting multiple areas of the business. - 2-3 years of experience in leading a small team. Professional & Technical Skills: - Development and deployment of machine learning and AI solutions (applied statistics, machine learning, model evaluation, data visualization). - Data science and data engineering tools (scikit-learn, xgboost, pandas, numpy, scipy, jupyter notebook). - Software engineering tools and practices (Python, Git, unit testing, CI/CD principles). - Visual analytics platforms (Tableau, Superset, Streamlit). - Database and other data storage technologies (SQL, NoSQL, Oracle, MySQL, S3, Athena, DynamoDB). - Cloud platform technology (AWS, MS Azure or Snowflake). - System infrastructure in Linux (Bash, Docker, CRON, Apache Airflow). - Strong understanding of the software development lifecycle and best practices. - Excellent problem-solving skills and attention to detail. - Must be able to function independently as well as work in a collaborative team environment. - Willingness to teach others and learn new technologies. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Machine Learning. - This position is based at our Hyderabad office. - A 15 years full time education is required. - Bachelor's degree in Computer Science, Mathematics, Data Science or a related field; Master's preferred. Qualification: 15 years full time education.
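The benchmarking and evaluation duties described above (comparing production model predictions against ground truth and reporting KPIs) can be sketched in plain Python. This is a minimal illustration, not code from any specific codebase; the function name and sample labels are invented:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision and recall for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Illustrative ground truth vs. production predictions
m = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In practice these numbers would feed the dashboards and KPI reports the role mentions (e.g., via Tableau, Superset, or Streamlit).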
Posted 1 day ago
5.0 - 10.0 years
9 - 13 Lacs
bengaluru
Work from Office
About The Role Project Role: Software Development Lead. Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity. Must have skills: Machine Learning. Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As a Software Development Lead, you will be responsible for developing and configuring software systems, applying knowledge of technologies, methodologies, and tools to support projects or clients throughout the product lifecycle. Roles & Responsibilities: - Develop and implement advanced algorithms using MATLAB, Simulink and Python. - Develop models for a variety of use cases, including classification, regression, recommendation systems, time-series forecasting, and anomaly detection. - Apply advanced mathematical techniques to create robust solutions. - Participate in code reviews and provide constructive feedback to ensure code quality and adherence to best practices. - Document algorithms clearly and concisely, including design rationale, assumptions, and limitations. - Prototype and test new algorithms or methods that could be beneficial for the team's goals. Professional & Technical Skills: - Strong experience in developing algorithms using MATLAB/Simulink and Python. - Proficiency in programming languages, particularly Python and R, and experience using Jupyter notebooks for experimentation and model development. - Expertise in supervised learning, unsupervised learning, reinforcement learning, and deep learning. - Solid understanding of linear algebra, calculus, probability theory, and statistics, which are fundamental to ML algorithms. - Understanding of how to deploy ML models into production using tools like Docker, Kubernetes, and CI/CD pipelines. Additional Information: - The candidate should have a minimum of 5 years of experience in Machine Learning. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification: 15 years full time education.
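One of the use cases listed above, anomaly detection on time series, can be sketched with only the standard library. The trailing-window z-score approach, window size, and threshold below are illustrative assumptions, not a prescribed method:

```python
from statistics import mean, stdev

def detect_anomalies(series, window=5, z_threshold=3.0):
    """Flag points deviating from the trailing-window mean by more than
    z_threshold standard deviations. Returns indices of flagged points."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma and abs(series[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# A stable signal with one obvious spike at index 6
data = [10, 11, 10, 12, 11, 10, 50, 11, 10]
```

A production version would typically use robust statistics or a forecasting model rather than a raw z-score, but the structure is the same.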
Posted 1 day ago
3.0 - 8.0 years
4 - 8 Lacs
hyderabad
Work from Office
About The Role Project Role: Software Development Engineer. Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills: Data Science. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As a Data Analyst, you will conduct specific advanced data analyses, develop custom tools, annotate data and implement robust data quality controls using statistical methods. You will support one specific area of our portfolio. Roles & Responsibilities: - Develop custom data analytics tools and data visualizations to report on data quality and AI model performance KPIs. - Provide technical support to existing solutions, addressing bug fixes and simple issues. - Independently design and utilize specialized tools to annotate data in various formats (e.g. time series) for predictive algorithm development, validation and monitoring. - Support data scientists with data exploration and cleaning, data analytics, data visualizations and data annotations. - Stay current with the latest advancements in advanced data analytics technologies to provide informed support and recommendations. - 1-3 years of experience in advanced data analysis on large data sets, including data annotation, cleaning and data visualization. Professional & Technical Skills: - Software engineering tools and practices (Python, Git, unit testing, CI/CD principles). - Data engineering tools (Python, pandas, jupyter notebook). - Visual analytics platforms (Tableau, Superset, Streamlit). - Database and other data storage technologies (SQL, NoSQL, Oracle, MySQL, S3, Athena, DynamoDB). - Cloud platform technology (AWS, MS Azure or Snowflake). - System infrastructure in Linux/Windows (Bash, PowerShell, Docker, CRON, Apache Airflow). - Demonstrated knowledge in a) Applied Mathematics (statistics, regression, simulation, scenario analysis) and b) Advanced Data Analytics (applied statistics, model evaluation, data visualization). - Good problem-solving skills and attention to detail. - Ability to communicate well with stakeholders and customers from a wide range of backgrounds. - Must be able to function independently as well as work in a collaborative team environment. - Willingness to learn new technologies. Additional Information: - The candidate should have a minimum of 3 years of experience in Data Science. - This position is based at our Bengaluru office. - A 15 years full time education is required. - Bachelor's degree in Computer Science, Mathematics, Data Science or a related field with coursework in statistics or data science. Qualification: 15 years full time education.
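The data quality controls this role describes (spotting missing values and duplicates before they reach a model) can be sketched in plain Python. The record layout and field names below are invented for illustration:

```python
def data_quality_report(records, required_fields):
    """Summarize missing values and duplicate rows in a list of dict records."""
    missing = {
        f: sum(1 for r in records if r.get(f) in (None, ""))
        for f in required_fields
    }
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))  # hashable fingerprint of the row
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "missing": missing, "duplicates": duplicates}

rows = [
    {"id": 1, "value": 10.5},
    {"id": 2, "value": None},   # missing value
    {"id": 1, "value": 10.5},   # exact duplicate of the first row
]
report = data_quality_report(rows, ["id", "value"])
```

A report like this is the kind of KPI input the posting's dashboards would visualize.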
Posted 1 day ago
3.0 - 8.0 years
12 - 16 Lacs
bengaluru
Work from Office
Job Summary: Synechron is seeking a detail-oriented and analytical Python Developer to join our data team. In this role, you will design, develop, and optimize data pipelines, analysis tools, and workflows that support key business and analytical functions. Your expertise in data manipulation, database management, and scripting will enable the organization to enhance data accuracy, efficiency, and insights. This position offers an opportunity to work closely with data analysts and scientists to build scalable, reliable data solutions that contribute directly to business decision-making and operational excellence. Software Requirements. Required Skills: Python (version 3.7 or higher) with experience in data processing and scripting; Pandas library (experience in large dataset manipulation and analysis); SQL (proficiency in writing performant queries for data extraction and database management); data management tools and databases such as MySQL, PostgreSQL, or similar relational databases. Preferred Skills: experience with cloud data services (AWS RDS, Azure SQL, GCP Cloud SQL); knowledge of additional Python libraries such as NumPy, Matplotlib, or Jupyter Notebooks for data analysis and visualization; data pipeline orchestration tools (e.g., Apache Airflow); version control tools like Git. Overall Responsibilities: Develop, test, and maintain Python scripts for ETL processes and data workflows. Utilize Pandas to clean, analyze, and transform large datasets efficiently. Write, optimize, and troubleshoot SQL queries for data extraction, updates, and management. Collaborate with data analysts and scientists to create data-driven analytic tools and solutions. Automate repetitive data workflows to increase operational efficiency and reduce errors. Maintain detailed documentation of data processes, pipelines, and procedures. Troubleshoot data discrepancies, pipeline failures, and database-related issues efficiently. Support ongoing data quality initiatives by identifying and resolving data inconsistencies. Technical Skills (By Category): Programming Languages: Required: Python (3.7+), proficiency with data manipulation and scripting. Preferred: additional scripting languages such as R or familiarity with other programming environments. Databases/Data Management: relational databases (MySQL, PostgreSQL, or similar); experience with query optimization and database schema design. Cloud Technologies: Preferred: basic experience with cloud data services (AWS, Azure, GCP) for data storage and processing. Frameworks and Libraries: Pandas, NumPy, Matplotlib, Jupyter Notebooks for data analysis and visualization; Airflow or similar orchestration tools (preferred). Development Tools and Methodologies: Git or similar version control tools; Agile development practices and collaborative workflows. Security Protocols: understanding of data privacy, confidentiality, and secure coding practices. Experience Requirements: 3+ years of experience in Python development with a focus on data processing and management. Proven hands-on experience in building and supporting ETL workflows and data pipelines. Strong experience working with SQL and relational databases. Demonstrated ability to analyze and manipulate large datasets efficiently. Familiarity with cloud data services is advantageous but not mandatory. Day-to-Day Activities: Write and enhance Python scripts to perform ETL, data transformation, and automation tasks. Design and optimize SQL queries for data extraction and updates. Collaborate with data analysts, scientists, and team members during daily stand-ups and planning sessions. Investigate and resolve data quality issues or pipeline failures promptly. Document data pipelines, workflows, and processes for clarity and future maintenance. Assist in developing analytical tools and dashboards for business insights. Review code changes through peer reviews and ensure adherence to best practices. Participate in continuous improvement initiatives related to data workflows and processing techniques. Qualifications: Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field. Relevant certifications or training in Python, data engineering, or database management are a plus. Proven track record of working on data pipelines, analysis, and automation projects. Professional Competencies: Strong analytical and problem-solving skills with attention to detail. Effective communication skills, able to collaborate across teams and explain technical concepts clearly. Ability to work independently and prioritize tasks effectively. Continuous learner, eager to adopt new tools, techniques, and best practices in data processing. Adaptability to changing project requirements and proactive in identifying process improvements. Focused on delivering high-quality work with a results-oriented approach.
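The Pandas cleaning-and-transformation work this posting centers on can be sketched in a few lines. The columns and values below are invented for illustration; a real extract would come from SQL or a file:

```python
import pandas as pd

# Raw extract with one exact duplicate and one missing value
raw = pd.DataFrame({
    "region": ["east", "east", "west", "west", "west"],
    "amount": [100.0, 100.0, None, 250.0, 150.0],
})

# Transform: drop rows with missing amounts, deduplicate, aggregate per region
clean = raw.dropna(subset=["amount"]).drop_duplicates()
summary = clean.groupby("region", as_index=False)["amount"].sum()
```

The `summary` frame is the kind of production-ready dataset the role would hand off to analysts or load back into a warehouse.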
Posted 1 day ago
2.0 - 4.0 years
25 - 35 Lacs
bengaluru
Work from Office
Technologies: Amazon Bedrock, RAG models, Java, Python, C or C++, AWS Lambda. Responsibilities: Responsible for developing, deploying, and maintaining a Retrieval Augmented Generation (RAG) model in Amazon Bedrock, our cloud-based platform for building and scaling generative AI applications. Design and implement a RAG model that can generate natural language responses, commands, and actions based on user queries and context, using the Anthropic Claude model as the backbone. Integrate the RAG model with Amazon Bedrock, our platform that offers a choice of high-performing foundation models from leading AI companies and Amazon via a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Optimize the RAG model for performance, scalability, and reliability, using best practices and robust engineering methodologies. Design, test, and optimize prompts to improve performance, accuracy, and alignment of large language models across diverse use cases. Develop and maintain reusable prompt templates, chains, and libraries to support scalable and consistent GenAI applications. Skills/Qualifications: Experience in programming with at least one software language, such as Java, Python, or C/C++. Experience in working with generative AI tools, models, and frameworks, such as Anthropic, OpenAI, Hugging Face, TensorFlow, PyTorch, or Jupyter. Experience in working with RAG architectures or related tooling, such as Ragna or Pinecone. Experience in working with Amazon Bedrock or similar platforms, such as AWS Lambda, Amazon SageMaker, or Amazon Comprehend. Ability to design, iterate, and optimize prompts for various LLM use cases (e.g., summarization, classification, translation, Q&A, and agent workflows). Deep understanding of prompt engineering techniques (zero-shot, few-shot, chain-of-thought, etc.) and their effect on model behavior. Familiarity with prompt evaluation strategies, including manual review, automatic metrics, and A/B testing frameworks. Experience building prompt libraries, reusable templates, and structured prompt workflows for scalable GenAI applications. Ability to debug and refine prompts to improve accuracy, safety, and alignment with business objectives. Awareness of prompt injection risks and experience implementing mitigation strategies. Familiarity with prompt tuning, parameter-efficient fine-tuning (PEFT), and prompt chaining methods. Familiarity with continuous deployment and DevOps tools preferred. Experience with Git preferred. Experience working in agile/scrum environments. Successful track record of interfacing and communicating effectively across cross-functional teams. Good communication, analytical and presentation skills, problem-solving skills and a learning attitude.
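The "reusable prompt templates" responsibility above can be illustrated with a minimal few-shot template builder in plain Python. This is a sketch only: the template layout, instruction, and examples are invented, and the resulting string would be sent to a model API (e.g., Bedrock) which is deliberately not shown here:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    parts = [instruction.strip(), ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # model continues from here
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("Great product!", "positive"), ("Terrible support.", "negative")],
    "Shipping was fast and easy.",
)
```

Keeping examples as data rather than baked into strings is what makes templates like this reusable across the use cases listed (summarization, classification, Q&A, and so on).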
Posted 1 day ago
7.0 - 12.0 years
8 - 13 Lacs
bengaluru
Work from Office
Your future role: Take on a new challenge and apply your data engineering expertise in a cutting-edge field. You'll work alongside collaborative and innovative teammates. You'll play a key role in enabling data-driven decision-making across the organization by ensuring data availability, quality, and accessibility. Day-to-day, you'll work closely with teams across the business (e.g., Data Scientists, Analysts, and ML Engineers), mentor junior engineers, and contribute to the architecture and design of our data platforms and solutions. You'll specifically take care of designing and developing scalable data pipelines, but also managing and optimizing object storage systems. We'll look to you for: Designing, developing, and maintaining scalable and efficient data pipelines using tools like Apache NiFi and Apache Airflow. Creating robust Python scripts for data ingestion, transformation, and validation. Managing and optimizing object storage systems such as Amazon S3, Azure Blob, or Google Cloud Storage. Collaborating with Data Scientists and Analysts to understand data requirements and deliver production-ready datasets. Implementing data quality checks, monitoring, and alerting mechanisms. Ensuring data security, governance, and compliance with industry standards. Mentoring junior engineers and promoting best practices in data engineering. All about you: We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 7+ years of experience in data engineering or a similar role. Strong proficiency in Python and data processing libraries (e.g., Pandas, PySpark). Hands-on experience with Apache NiFi for data flow automation. Deep understanding of object storage systems and cloud data architectures. Proficiency in SQL and experience with both relational and NoSQL databases. Familiarity with cloud platforms (AWS, Azure, or GCP). Exposure to the Data Science ecosystem, including tools like Jupyter, scikit-learn, TensorFlow, or MLflow. Experience working in cross-functional teams with Data Scientists and ML Engineers. Cloud certifications or relevant technical certifications are a plus.
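The ingestion, validation, and transformation stages this role describes can be sketched as chained plain-Python steps. The record schema, the quality rule, and the batch tag are illustrative assumptions; in production each step would typically be a NiFi processor or an Airflow task rather than a direct function call:

```python
def validate(records):
    """Data quality gate: every record needs a non-empty 'id' and a numeric 'value'."""
    bad = [r for r in records
           if not r.get("id") or not isinstance(r.get("value"), (int, float))]
    if bad:
        raise ValueError(f"{len(bad)} invalid record(s)")
    return records

def transform(records):
    """Normalize values and tag the load batch (batch id is hard-coded for illustration)."""
    return [{"id": r["id"], "value": round(r["value"], 2), "batch": "2024-01-01"}
            for r in records]

def run_pipeline(records):
    """Ingest -> validate -> transform, mirroring the stages an orchestrated flow would run."""
    return transform(validate(records))

out = run_pipeline([{"id": "a", "value": 1.5}, {"id": "b", "value": 2.0}])
```

Failing loudly at the validation gate, rather than letting bad rows through, is what lets the monitoring and alerting mechanisms mentioned above catch problems early.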
Posted 2 days ago
15.0 - 25.0 years
5 - 9 Lacs
pune
Work from Office
Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: SAP for Utilities Billing. Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education. Key Responsibilities: a. Design, configure and build applications to meet business process and application requirements. b. Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation. c. Billing execution (individual and batch) with daily reporting to managers, and risk identification in your module. d. Analyze issues and work on bug fixes. Technical Experience: a. Should have hands-on knowledge of implementing Billing-related enhancements and FQ events. b. Should have knowledge of standard modules used in RICEFW development for Billing objects. c. Should have good knowledge of all Billing and Invoicing processes, such as the Meter to Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advance billing functions like Real Time Pricing and Budget Billing. d. Should have sound knowledge of Billing Master Data and integration points with Device Management and FICA. e. Should have strong debugging skills and PWB experience. Additional info: a. Good communication skills. b. Good interpersonal skills. c. A minimum of 15 years of full-time education is required. Qualification: 15 years full time education.
Posted 3 days ago
1.0 - 5.0 years
0 Lacs
maharashtra
On-site
As a Data Analyst in C#/.NET and SQL at ISS STOXX, located in Mumbai (Goregaon East), your role involves utilizing technology to transform raw data into valuable insights, creating engaging visualizations, and automating report production. You will be responsible for writing intricate queries using Microsoft SQL Server and Python to analyze large volumes of proprietary data and automate insightful client deliverables using .NET code. Additionally, you will collaborate with the advisory team, fulfill ad-hoc requests from the sales team and media, and explore new ways to analyze and present information leveraging compensation, governance, and sustainability data. Your shift hours will be from 12 PM to 9 PM IST. Key Responsibilities: - Maintain and support a variety of reports - Develop new reports, tools, and deliverables for different business lines - Generate customized reports based on client requests - Extract data from a vast collection of executive compensation, corporate governance, and sustainability datasets - Utilize Python for data analysis and visualization integration Qualifications: - Bachelor's degree or associate's degree with a minimum of 3 years of relevant professional experience - Proficiency in Microsoft SQL querying and writing MSSQL queries - Interest in modifying and writing C#.NET code in Visual Studio - Experience with Python, particularly Numpy, Pandas, and Jupyter Notebook - Familiarity with Microsoft PowerPoint primary interop assembly - Strong data manipulation skills in Excel - Experience with GitLab or similar source code management systems - Ability to work with different programming languages such as T-SQL, C#.NET, Python - Understanding of relational database concepts - Excellent communication skills - Ability to transform data into visualizations - Strong problem-solving skills and quick learning ability - Commitment to documenting and commenting on codes - Team-oriented mindset and time management skills - Fluent in English At ISS 
STOXX, we value diversity, skills, and experiences, and provide resources, support, and growth opportunities to our employees. We encourage creativity, innovation, and collaboration to drive our success and shape the future together. Join us at ISS STOXX, a leading provider of research and technology solutions for the financial market, offering benchmark and custom indices globally. Explore opportunities with us to make informed decisions and contribute to the benefit of stakeholders. Learn more about us at https://www.issgovernance.com and discover additional open roles at https://www.issgovernance.com/join-the-iss-team/.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
delhi
On-site
As an experienced data analytics professional with 1 to 2 years of experience, you will be responsible for developing and implementing data analytics methodologies. Your role will require good interpersonal skills along with excellent communication abilities. Your technical skills must include proficiency in Python, machine learning, deep learning, data wrangling, and integration with Big Data tools such as Hadoop, Sqoop, Impala, Hive, Pig, and SparkR. You should also have a solid understanding of statistics, data mining, algorithms, time series analysis, forecasting, SQL queries, and Tableau data visualization. A good grasp of technologies like Hadoop, HBase, Hive, Pig, MapReduce, Python, R, Java, Apache Spark, and Impala, along with machine learning algorithms, is essential for this role. Your responsibilities will involve developing training content on Big Data and Hadoop technologies for students, working professionals, and corporates. You will conduct both online and classroom training sessions, provide practical use cases and assignments, and design self-paced recorded training sessions. It's important to continuously enhance teaching methodologies for an effective online learning experience and to work collaboratively in small teams to make a significant impact. You will be tasked with designing and overseeing the development of real-time projects to provide practical exposure to the trainees. Additionally, you may work as a consultant or architect in developing and training real-time Big Data applications for corporate clients, either on a part-time or full-time basis. Hands-on knowledge of tools like Anaconda Navigator, Jupyter Notebook, Hadoop, Hive, Pig, MapReduce, Apache Spark, Impala, SQL, and Tableau is required to excel in this role.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
LSEG (London Stock Exchange Group) is a prominent provider of financial markets infrastructure worldwide, offering financial data, analytics, news, and index products to a vast customer base spanning over 170 countries. Committed to excellence, we collaborate with our customers throughout the trade lifecycle to assist them in making informed decisions, executing trades, raising capital, and optimizing their operations. With a rich history of over three centuries, cutting-edge technologies, and a global team of 25,000 professionals across 60 countries, we are dedicated to promoting financial stability, driving economic empowerment, and fostering sustainable growth. In the role of Group Director of Software Engineering, you will be responsible for designing and implementing functionalities with a focus on Data Engineering tasks. Your primary tasks will involve working with semi-structured data to ingest and distribute it on a Microsoft Fabric-based platform, thereby modernizing data products and distribution channels. You will play a pivotal role in driving the software development lifecycle for continuous data delivery and spearheading the evaluation and adoption of emerging technologies. 
Key Responsibilities: - Collaborate closely with Subject Matter Experts (SMEs) and Tech Leads to ensure delivery on commitments - Develop and maintain secure and compliant production data processing pipelines on Microsoft Fabric and Azure for data ingestion, transformation, and product delivery - Ensure high-performing, efficient, organized, and reliable data pipelines and data stores based on business requirements and constraints - Design, implement, monitor, and optimize data platforms to meet the needs of data pipelines from both functional and non-functional perspectives - Undertake data-related implementation tasks encompassing provisioning data storage services, ingesting streaming and batch data, implementing security requirements, defining data retention policies, identifying performance bottlenecks, establishing monitoring and telemetry, and accessing external data sources - Design and operationalize large-scale enterprise data solutions and applications utilizing Azure data and analytics services such as Delta.io, Lakehouse, Fabric, Azure Cosmos DB, Azure Data Factory, Spark, and Azure Blob storage, among others Skills and Experience: - Relevant experience in Data Platforms within the Financial Services industry and familiarity with Azure's PaaS/SaaS offerings - Proven expertise as a data engineer with a focus on cloud distributed data processing platforms like Spark and modern open table concepts such as delta/iceberg - Strong proficiency in Azure services including Synapse Analytics, Data Factory, Data Lake, Databricks, Microsoft Purview, and others - Proficiency in Spark, SQL, and Python/Scala/Java - Experience in building Lakehouse architecture using open-source table formats like delta and parquet, along with tools like Jupyter Notebook - Strong understanding of security processes utilizing Azure Key Vault, IAM, RBAC, Monitor, etc. 
- Proficient in integrating, transforming, and consolidating data from diverse structured and unstructured data systems to build analytics solutions - Ability to think strategically while effectively managing day-to-day delivery requirements - Strong communication, presentation, documentation, and interpersonal skills - Capable of working independently in a fast-paced environment with changing requirements and priorities Join us at LSEG, a leading global financial markets infrastructure and data provider, where our core values of Integrity, Partnership, Excellence, and Change guide us in driving financial stability, empowering economies, and fostering sustainable growth.
Posted 2 weeks ago
5.0 - 10.0 years
5 - 10 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a highly analytical Data Scientist with strong proficiency in Python and SQL . You will be responsible for conducting data exploration, improving data quality, and contributing to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes. This role requires a solid understanding of data quality principles and experience with data matching techniques, including fuzzy logic, to deliver actionable insights and recommendations to stakeholders. Roles & Responsibilities: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic , and improving overall data quality. Perform various analyses specifically aimed at improving data quality within the SCI system, including identifying issues and implementing solutions. Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. Perform thorough data analysis and validation of SCI records after batch ingestion and proactively identify insights to improve data quality. Coordinate with business stakeholders to facilitate the manual validation of records and communicate findings and recommendations clearly and effectively. Skills Required: Strong proficiency in Python and SQL . Extensive experience using Jupyter Notebook for data analysis and visualization. Working knowledge of data matching techniques, including fuzzy logic. Experience working with large datasets from various sources (Excel, databases, etc.). Solid understanding of data quality principles and methodologies. Experience with Machine Learning , statistical modeling, and algorithms is a valuable skill. Familiarity with cloud computing platforms ( AWS, Azure, GCP ) and data visualization tools ( Tableau, Power BI ) is a plus. 
Excellent communication, collaboration, and problem-solving skills. QUALIFICATION: Bachelor's or Master's degree in a quantitative field such as Data Science, Statistics, Computer Science, or a related field, or equivalent practical experience.
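The fuzzy-logic record matching this role calls for can be sketched with the standard library's `difflib.SequenceMatcher`; production systems usually use dedicated libraries and more elaborate normalization, so treat the threshold and the company names below as illustrative:

```python
from difflib import SequenceMatcher

def fuzzy_match(name, candidates, threshold=0.85):
    """Return candidates whose similarity ratio to `name` meets the threshold,
    best match first -- a simple stand-in for production fuzzy-matching logic."""
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    return [c for score, c in sorted(scored, reverse=True) if score >= threshold]

matches = fuzzy_match("Acme Corp", ["ACME Corp.", "Acme Corporation", "Zenith Ltd"])
```

Note how "Acme Corporation" falls below the 0.85 threshold here; tuning that cut-off against manually validated records is exactly the kind of data quality analysis the role describes.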
Posted 2 weeks ago
5.0 - 8.0 years
4 - 7 Lacs
Mumbai, Maharashtra, India
On-site
Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical skills
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
As an AI/ML Engineer, your primary responsibility will be deploying AI/ML solutions to production and deriving value from them. You will collaborate with teammates and engage with stakeholders and the data, business, and product teams to develop end-to-end solutions. Your support to the DevOps team in deploying and integrating solutions with existing systems will be crucial.
Ensuring data quality and availability is a key aspect of the role, involving handling missing, unstructured, or inconsistent data for model training and validation. Developing models that perform effectively in real-world scenarios, optimizing the scalability of AI solutions, and keeping abreast of rapid advances in AI are also part of your responsibilities. You will need to ensure the ethical and regulatory compliance of AI models, maintaining transparency, fairness, and adherence to regulations and ethical guidelines.
To be successful in this role, you should have 2-4 years of relevant experience. Preferred educational qualifications include a Master's in Science, Statistics, Machine Learning, or Data Science, or a Bachelor's in Engineering, Technology, Data Science, or Computer Science. Your functional skills should encompass working knowledge of the data science toolbox, including Python, SQL, Jupyter Notebook, Azure/AWS cloud, PySpark, TensorFlow, PyTorch, Keras, and other Big Data tools. An in-depth understanding of AI/ML models, particularly LLMs, Reinforcement Learning, Computer Vision, and Deep Learning, is essential. Expertise in NLP, Computer Vision, and speech technology (STT, TTS) will be advantageous.
On the behavioral front, you should be able to align technology with business objectives, thrive on innovation, and bring a problem-solving mindset to experimentation. Your focus should be on achieving results with excellence in execution and maintaining customer orientation. A positive "Can Do and Will Do" attitude will be instrumental in driving success in this role.
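The "handling missing data for model training" responsibility above can be illustrated with a small sketch. This is a generic baseline, not any specific team's pipeline: the function name and the choice of median imputation are assumptions for the example (real projects weigh imputation against dropping rows or model-based filling).

```python
from statistics import median

def impute_missing(values):
    """Replace None entries with the median of the observed values,
    a common baseline before feeding a numeric column to a model."""
    observed = [v for v in values if v is not None]
    fill = median(observed)
    return [fill if v is None else v for v in values]

readings = [12.0, None, 15.0, 14.0, None]
print(impute_missing(readings))  # → [12.0, 14.0, 15.0, 14.0, 14.0]
```

The median is preferred over the mean here because it is robust to the outliers that inconsistent source data often contains.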
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Scientist Intern at Evoastra Ventures Pvt. Ltd., you will work with real-world datasets and gain valuable industry exposure to accelerate your entry into the data science domain. Evoastra Ventures is a research-first data and AI solutions company focused on delivering value through predictive analytics, market intelligence, and technology consulting. Our goal is to empower businesses by transforming raw data into strategic decisions.
In this role, you will perform data cleaning, preprocessing, and transformation, and conduct exploratory data analysis (EDA) to identify trends. You will assist in the development and evaluation of machine learning models and contribute to reports and visual dashboards summarizing key insights. Additionally, you will document workflows, collaborate with team members on project deliverables, and participate in regular project check-ins and mentorship discussions.
To excel in this role, you should have basic knowledge of Python, statistics, and machine learning concepts, along with good analytical and problem-solving skills. You should be willing to learn and adapt in a remote, team-based environment, possess strong communication and time-management skills, and have access to a laptop with a stable internet connection.
Through the internship, you will gain a Verified Internship Certificate, a Letter of Recommendation based on your performance, real-time mentorship from professionals in data science and analytics, project-based learning opportunities with portfolio-ready outputs, and priority consideration for future paid internships or full-time roles at Evoastra. You will also be recognized in our internship alumni community.
If you meet the eligibility criteria and are eager to build your career foundation with hands-on data science projects that make an impact, submit your resume via our internship application form at www.evoastra.in. Selected candidates will receive an onboarding email with further steps. Please note that this internship is fully remote and unpaid.
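The exploratory data analysis mentioned in this role can be sketched in a few lines of standard-library Python. This is an illustrative baseline only: the function name, the chosen statistics, and the sample data are assumptions for the example (in practice, interns would typically use pandas inside a Jupyter Notebook).

```python
from statistics import mean, stdev

def summarize(column):
    """Basic EDA summary for a numeric column: count, central
    tendency, spread, and range."""
    return {
        "count": len(column),
        "mean": round(mean(column), 2),
        "stdev": round(stdev(column), 2),
        "min": min(column),
        "max": max(column),
    }

# Hypothetical daily sales figures
sales = [120, 135, 128, 150, 142]
print(summarize(sales))
```

A summary like this is a typical first step before trend analysis: it flags impossible values (negative counts, extreme outliers) that the data-cleaning stage should handle.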
Posted 3 weeks ago