Home
Jobs

50 MLlib Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 2+ years of hands-on experience.

Must Have
- Experience in building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks)
- Knowledge of predictive/prescriptive analytics, especially Log-Log, Log-Linear, and Bayesian regression techniques, as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, e.g., statistical tests and distributions
- Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight

Nice To Have
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

Roles And Responsibilities
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or the PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 2+ years of hands-on experience.

Must Have
- Experience in building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks)
- Knowledge of predictive/prescriptive analytics, especially Log-Log, Log-Linear, and Bayesian regression techniques, as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, e.g., statistical tests and distributions
- Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight

Nice To Have
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

Roles And Responsibilities
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or the PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 day ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Senior Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 4+ years of hands-on experience.

Must Have
- Experience in building ML models in cloud environments (at least one of Azure ML, GCP’s Vertex AI platform, or AWS SageMaker)
- Knowledge of predictive/prescriptive analytics, especially Log-Log, Log-Linear, and Bayesian regression techniques, as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, e.g., statistical tests and distributions
- Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight

Nice To Have
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

Roles And Responsibilities
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or the PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 2+ years of hands-on experience.

Must Have
- Experience in building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks)
- Knowledge of predictive/prescriptive analytics, especially Log-Log, Log-Linear, and Bayesian regression techniques, as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, e.g., statistical tests and distributions
- Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight

Nice To Have
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

Roles And Responsibilities
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or the PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 2+ years of hands-on experience.

Must Have
- Experience in building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks)
- Knowledge of predictive/prescriptive analytics, especially Log-Log, Log-Linear, and Bayesian regression techniques, as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, e.g., statistical tests and distributions
- Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight

Nice To Have
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

Roles And Responsibilities
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or the PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description: We are seeking a knowledgeable and passionate PySpark Developer Trainer with deep expertise in Apache Spark, Python, and big data technologies. The ideal candidate will have 2–5 years of hands-on experience in PySpark development and a genuine interest in teaching and mentoring aspiring data engineers and analysts.

Roles & Responsibilities:
- Deliver engaging, hands-on training sessions on PySpark fundamentals and advanced data engineering concepts.
- Design and implement real-world projects, case studies, and capstone assignments to reinforce learning.
- Teach RDDs, DataFrames, Datasets, Spark SQL, and Spark Streaming with practical use cases.
- Guide learners in data ingestion, transformation, and optimization using Spark and related tools.
- Support students with code reviews, debugging sessions, and conceptual understanding.
- Evaluate learner progress through projects, quizzes, assignments, and live coding challenges.
- Conduct webinars, live ETL pipeline builds, and industry-focused Q&A sessions.
- Adapt teaching methodology to suit both entry-level and advanced learners.

Technology-Specific Responsibilities:

Core PySpark & Apache Spark:
- Train students in Spark architecture, RDD vs DataFrame vs Dataset, and lazy evaluation.
- Teach efficient use of Spark transformations, actions, and execution planning.
- Demonstrate building ETL pipelines and handling large-scale data processing with Spark SQL and UDFs.

Data Engineering & Processing:
- Guide learners in working with structured and semi-structured data (CSV, JSON, Parquet, ORC).
- Teach performance-tuning techniques: caching, partitioning, broadcast joins (a broadcast-join sketch follows this posting).
- Introduce streaming concepts using Spark Structured Streaming.

Big Data Ecosystem:
- Familiarize learners with tools like HDFS, Hive, Kafka, Airflow, and Delta Lake.
- Cover integration with data sources (JDBC, S3, NoSQL) and data lakes.

Advanced Tools & Best Practices:
- Train on writing modular, testable Spark code using Python best practices.
- Demonstrate logging, error handling, and unit testing with PyTest.
- Emphasize performance, scalability, and cluster resource tuning.
- Introduce CI/CD, Git-based workflows, and cloud deployments (AWS EMR, Databricks, GCP Dataproc).

Requirements:
- 2–5 years of professional experience in PySpark and big data development.
- Strong understanding of Python, Spark internals, and distributed systems.
- Proficiency in SQL, Spark SQL, and data transformation pipelines.
- Experience with data modeling, job scheduling, and workflow orchestration.
- Ability to simplify technical concepts and mentor aspiring developers effectively.

Preferred Skills:
- Exposure to Databricks, AWS Glue, or GCP BigQuery + Dataproc.
- Familiarity with Apache Airflow, Docker, or Kubernetes-based data pipelines.
- Experience with batch vs real-time architectures, Spark MLlib, or GraphFrames.
- Knowledge of DevOps tools (Git, Jenkins) for data workflows.
- Exposure to Delta Lake, Iceberg, or Lakehouse architectures.

Why Join Us?
- Inspire and guide the next generation of data engineers and PySpark developers.
- Be part of a collaborative, innovative, and flexible learning ecosystem.
- Enjoy remote work opportunities and flexible teaching schedules.
- Competitive pay with additional opportunities in curriculum building, content creation, and community leadership.
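Of the performance-tuning techniques this posting names, the broadcast join is the easiest to show concretely. Below is a minimal, hypothetical PySpark sketch; the paths, table, and column names are illustrative and not taken from the posting:

```python
# Minimal sketch of a broadcast join in PySpark (illustrative names/paths).
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join-demo").getOrCreate()

# Large fact table: too big to shuffle cheaply.
events = spark.read.parquet("s3://example-bucket/events/")   # hypothetical path

# Small lookup table: fits comfortably in each executor's memory.
countries = spark.read.csv("countries.csv", header=True)     # hypothetical path

# broadcast() hints Spark to ship the small table to every executor,
# turning a shuffle (sort-merge) join into a map-side broadcast hash join.
enriched = events.join(broadcast(countries), on="country_code", how="left")

enriched.explain()  # the plan should now show BroadcastHashJoin
```

The explain() call is a quick way for learners to verify that the planner switched from a sort-merge join to a broadcast hash join.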

Posted 2 days ago

Apply

5.0 years

5 - 6 Lacs

Bengaluru

On-site

Company Description: At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Responsibilities:
- Research, design, develop, implement, and test econometric, statistical, optimization, and machine learning models.
- Design, write, and test modules for Nielsen analytics platforms using Python, R, SQL, and/or Spark.
- Utilize advanced computational/statistics libraries including Spark MLlib, scikit-learn, SciPy, and StatsModels.
- Collaborate with cross-functional Data Science, Product, and Technology teams to integrate best practices from across the organization.
- Provide leadership and guidance for the team in the adoption of new tools and technologies to improve our core capabilities.
- Execute and refine the roadmap to upgrade the modeling/forecasting/control functions of the team to improve upon the core service KPIs.
- Ensure product quality, stability, and scalability by facilitating code reviews and driving best practices like modular code, unit tests, and CI/CD workflows.
- Explain complex data science (e.g. model-related) concepts in simple terms to non-technical internal and external audiences.

Qualifications
Key Skills:
- 5+ years of professional work experience in Statistics, Data Science, and/or related disciplines, with a focus on delivering analytics software solutions in a production environment.
- Strong programming skills in Python with experience in NumPy, Pandas, SciPy, and scikit-learn.
- Hands-on experience with deep learning frameworks (PyTorch, TensorFlow, Keras).
- Solid understanding of machine learning domains such as Computer Vision, Natural Language Processing, and classical machine learning.
- Proficiency in SQL and NoSQL databases for large-scale data manipulation.
- Experience with cloud-based ML services (AWS SageMaker, Databricks, GCP AI, Azure ML).
- Knowledge of model deployment (FastAPI, Flask, TensorRT, ONNX), MLOps tools (MLflow, Kubeflow, Airflow), and containerization.

Preferred Skills:
- Understanding of LLM fine-tuning, tokenization, embeddings, and multimodal learning.
- Familiarity with vector databases (FAISS, Pinecone) and retrieval-augmented generation (RAG).
- Familiarity with advertising intelligence, recommender systems, and ranking models.
- Knowledge of CI/CD for ML workflows and software development best practices.

Additional Information: Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 1 week ago

Apply

5.0 years

4 - 6 Lacs

Bengaluru

On-site

Job Title: Senior AI Engineer
Location: Bengaluru, India (Hybrid)

At Reltio®, we believe data should fuel business success. Reltio's AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands—across multiple industries around the globe—rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk and drive growth.

At Reltio, our values guide everything we do. With an unyielding commitment to prioritizing our "Customer First", we strive to ensure their success. We embrace our differences and are "Better Together" as One Reltio. We are always looking to "Simplify and Share" our knowledge when we collaborate to remove obstacles for each other. We hold ourselves accountable for our actions and outcomes and strive for excellence. We "Own It". Every day, we innovate and evolve, so that today is "Always Better Than Yesterday". If you share and embody these values, we invite you to join our team at Reltio and contribute to our mission of excellence.

Reltio has earned numerous awards and top rankings for our technology, our culture and our people. Reltio was founded on a distributed workforce and offers flexible work arrangements to help our people manage their personal and professional lives. If you're ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to enable digital transformation with connected data, let's talk!

Job Summary: As a Senior AI Engineer at Reltio, you will be a core part of the team responsible for building intelligent systems that enhance data quality, automate decision-making, and drive entity resolution at scale. You will work with cross-functional teams to design and deploy advanced AI/ML solutions that are production-ready, scalable, and embedded into our flagship data platform. This is a high-impact engineering role with exposure to cutting-edge problems in entity resolution, deduplication, identity stitching, record linking, and metadata enrichment.

Job Duties and Responsibilities:
- Design, implement, and optimize state-of-the-art AI/ML models for solving real-world data management challenges such as entity resolution, classification, similarity matching, and anomaly detection (an illustrative sketch follows this posting).
- Work with structured, semi-structured, and unstructured data to extract signals and engineer intelligent features for large-scale ML pipelines.
- Develop scalable ML workflows using Spark, MLlib, PyTorch, TensorFlow, or MLflow, with seamless integration into production systems.
- Translate business needs into technical design and collaborate with data scientists, product managers, and platform engineers to operationalize models.
- Continuously monitor and improve model performance using feedback loops, A/B testing, drift detection, and retraining strategies.
- Conduct deep dives into customer data challenges and apply innovative machine learning algorithms to address accuracy, speed, and bias.
- Actively contribute to research and experimentation efforts, staying updated on the latest AI trends in graph learning, NLP, probabilistic modeling, etc.
- Document designs and present outcomes to both technical and non-technical stakeholders, fostering transparency and knowledge sharing.

Skills You Must Have:
- Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field. A PhD is a plus.
- 5+ years of hands-on experience in developing and deploying machine learning models in production environments.
- Proficiency in Python (NumPy, scikit-learn, pandas, PyTorch/TensorFlow) and experience with large-scale data processing tools (Spark, Kafka, Airflow).
- Strong understanding of ML fundamentals, including classification, clustering, feature selection, hyperparameter tuning, and evaluation metrics.
- Demonstrated experience working with entity resolution, identity graphs, or data deduplication.
- Familiarity with containerized environments (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Strong debugging, analytical, and communication skills with a focus on delivery and impact.
- Attention to detail, the ability to work independently, and a passion for staying updated on the latest advancements in the field of data science.

Skills Good to Have:
- Experience with knowledge graphs, graph-based ML, or embedding techniques.
- Exposure to deep learning applications in data quality, record matching, or information retrieval.
- Experience building explainable AI solutions in regulated domains.
- Prior work in SaaS, B2B enterprise platforms, or data infrastructure companies.

Why Join Reltio?
Health & Wellness:
- Comprehensive group medical insurance, including your parents, with additional top-up options
- Accidental insurance
- Life insurance
- Free online unlimited doctor consultations
- An Employee Assistance Program (EAP)
Work-Life Balance:
- 36 annual leaves, comprising 18 sick leaves and 18 earned leaves
- 26 weeks of maternity leave, 15 days of paternity leave
- Unique to Reltio: one additional week off as a recharge week every year, globally
Support for home office setup: home office setup allowance.
Stay Connected, Work Flexibly:
- Mobile & internet reimbursement
- No need to pack a lunch—we've got you covered with a free meal
And many more…

Reltio is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Reltio is committed to working with and providing reasonable accommodation to applicants with physical and mental disabilities.
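As an illustration of the entity-resolution and similarity-matching work this role describes, here is a minimal sketch (under assumed data, not Reltio's actual implementation) using Spark MLlib's MinHash LSH to surface candidate duplicate records at scale. The column names and sample records are hypothetical:

```python
# Sketch: approximate record matching for entity resolution with MinHash LSH.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

spark = SparkSession.builder.appName("entity-resolution-sketch").getOrCreate()

records = spark.createDataFrame(
    [(0, "Acme Corporation, 12 Main St"),
     (1, "ACME Corp, 12 Main Street"),
     (2, "Globex Industries, 9 Elm Ave")],
    ["id", "raw"],
)

# Character trigrams make matching robust to small spelling variations.
pipeline = Pipeline(stages=[
    RegexTokenizer(inputCol="raw", outputCol="chars", pattern=".", gaps=False),
    NGram(n=3, inputCol="chars", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="features"),
    MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=5),
])
model = pipeline.fit(records)
hashed = model.transform(records)

# Self-join to find candidate duplicate pairs under a Jaccard-distance threshold.
lsh = model.stages[-1]
pairs = (lsh.approxSimilarityJoin(hashed, hashed, 0.6, distCol="jaccard")
            .filter("datasetA.id < datasetB.id"))
pairs.select("datasetA.id", "datasetB.id", "jaccard").show()
```

Blocking with LSH like this keeps candidate generation near-linear in the number of records; a more precise matcher can then score each candidate pair.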

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Amazon is investing heavily in building a world-class advertising business, and we are responsible for defining and delivering a collection of self-service performance advertising products that drive discovery and sales. Our products are strategically important to our Retail and Marketplace businesses, driving long-term growth. We deliver billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products. We are highly motivated, collaborative and fun-loving with an entrepreneurial spirit and bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.

The ATT team, based in Bangalore, is responsible for ensuring that ads are relevant and of good quality, leading to higher conversion for the sellers and providing a great experience for the customers. We deal with one of the world’s largest product catalogs, handle billions of requests a day with plans to grow by an order of magnitude, and use automated systems to validate tens of millions of offers submitted by thousands of merchants in multiple countries and languages.

In this role, you will build and develop ML models to address content-understanding problems in Ads. These models will rely on a variety of visual and textual features, requiring expertise in both domains, and need to scale to multiple languages and countries. You will collaborate with engineers and other scientists to build, train and deploy these models. As part of these activities, you will develop production-level code that enables moderation of millions of ads submitted each day.

Basic Qualifications
- 3+ years of experience building machine learning models for business applications
- PhD, or Master's degree and 6+ years of applied research experience
- Experience programming in Java, C++, Python or a related language
- Experience with neural deep learning methods and machine learning

Preferred Qualifications
- Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, numpy, scipy etc.
- Experience with large-scale distributed systems such as Hadoop, Spark etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2978163

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Come work on fantastically high-scale systems with us! Blis is an award-winning, global leader and technology innovator in big data analytics and advertising. We help brands such as McDonald's, Samsung, and Mercedes Benz to understand and effectively reach their best audiences. In doing this, we are industry champions in our commitment to the ethical use of data and believe people should have control over their data and privacy. With offices across four continents, we are a truly international company with a global culture and scale. We’re headquartered in the UK, financially successful and stable, and looking forward to continued growth. We’d love it if you could join us on the journey!

We are looking for solid and experienced Data Engineers to work on building out secure, automated, scalable pipelines on GCP. We receive over 350GB of data an hour and respond to 400,000 decision requests each second, with petabytes of analytical data to work with. We tackle challenges across almost every major discipline of data science, including classification, clustering, optimisation, and data mining. You will be responsible for building stable production-level pipelines, maximising the efficiency of cloud compute to ensure that data is properly enabled for operational and scientific use. This is a growing team with big responsibilities and exciting challenges ahead of it, as we look to reach the next 10x level of scale and intelligence. Our employees are passionate about teamwork and technology, and we are looking for someone who wants to make a difference within a growing, successful company.

At Blis, Data Engineers are a combination of software engineers, cloud engineers, and data processing engineers. They actively design and build production pipeline code, typically in Python, whilst having practical experience in ensuring, policing, and measuring for good data governance, quality, and efficient consumption. To run an efficient landscape, we are ideally looking for candidates who are comfortable with event-driven automation across all aspects of our operational pipelines. As a Blis data engineer, you will seek to understand the data and the problem definition and find efficient solutions; critical thinking is a key component of efficient pipelines and effective reuse, which must include defining pipelines with the correct controls and recovery points, not only function and scale. Across the team, everyone supports each other through mentoring, brainstorming, and pairing up. They have a passion for delivering products that delight and astound our customers and that have a long-lasting impact on the business. They do this while also optimising themselves and the team for long-lasting agility, which is often synonymous with practicing Good Engineering. They are almost always adherents of Lean Development and work well in environments with significant amounts of freedom and ambitious goals.

Shift: 12 pm - 8 pm (Mon - Fri)
Location: Mumbai (Hybrid - 3 days onsite)

Key Responsibilities
- Design, build, monitor, and support large-scale data processing pipelines.
- Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity.
- Help Blis explore and exploit new data streams to innovate and support commercial and technical growth.
- Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers. The ideal candidate will be comfortable with fast feature delivery with robust engineered follow-up.

Skills And Experience
- 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints.
- Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture.
- Mastery of building pipelines in GCP, maximising the use of native and native-supporting technologies, e.g. Apache Airflow.
- Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques.
- Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark).
- Fluency with the Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib and/or other machine learning and statistical libraries.
- Advanced knowledge of cloud-based services, specifically GCP.
- Excellent working understanding of server-side Linux.
- Professional in managing and updating on tasks, ensuring appropriate levels of documentation, testing, and assurance around their solutions.

Desired
- Experience optimizing both code and config in Spark, Hive, or similar tools.
- Practical experience working with relational databases, including advanced operations such as partitioning and indexing.
- Knowledge and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems.
- Understanding and ability to innovate, apply, and optimize complex algorithms and statistical techniques to large data structures.
- Experience with Python notebooks, such as Jupyter, Zeppelin, or Google Datalab, to analyze, prototype, and visualize data and algorithmic output.

About Us
Blis is the geo-powered advertising tech stack. We’ve built a radically different omnichannel advertising solution structured on geography, not identity. Audience Explorer is our powerful audience planning platform delivering actionable intelligence & insight to advertisers. With Blis, advertisers can plan unified audiences with data from premium partners, connected by geo. Buy audiences using smart cookieless technology that can double performance and halve costs. Measure the audience, not just the channel, with patent-pending omnichannel measurement technology. Established in the UK in 2004, Blis now operates in more than 40 offices across five continents, working with the world’s largest and most successful companies, as well as every major media agency.

As an equal opportunity employer, we treat all our employees and job applicants fairly and equally. We oppose all forms of unlawful and unfair discrimination and take all reasonable steps to create a work environment in which all employees are treated with respect and dignity. We don't condone or tolerate any form of harassment, by employees or by others who do business with us.

Our values
Brave - We're leaders, not followers. An innovation and growth mindset helps us solve everyday challenges and achieve breakthroughs. Our passion drives us to innovate. We don’t see barriers, just possibilities. We take ownership and hold ourselves accountable for outcomes, good and bad – and we don’t pass the buck.
Love our clients - We're client obsessed. We do what we say and build trusted relationships with our partners for the long term. We act with integrity. We put our clients at the centre of our business. We obsess over the best insights, ideas and solutions to deliver WOW, and work with honesty and accountability to get it done.
Inclusive - We're one team. We are empathetic and embrace diversity. Everyone has a voice and can bring their authentic self to work. We care about and support each other – with humility and good humour. Mutual respect and wellbeing are key. We strive to eliminate bias and be open and transparent.
Solutions driven - We're action oriented. Speed matters in business, so we're solution-driven and action-oriented. We value simplification and calculated risk taking. We are lean, agile and resourceful self-starters. We collaborate and break silos, working thoughtfully and with urgency to solve problems, while learning from mistakes and celebrating wins.

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Dear Candidate, Greetings from TCS!

TCS is hiring for Data Scientist; please find the JD below.

Experience range: 6 to 8 years
Location: Pan India

Skills Required:
- Python, PySpark, MLflow, Databricks AutoML
- Predictive Modelling (Classification, Clustering, Regression, time series and NLP)
- Cloud platform (Azure/AWS), Delta Lake, Unity Catalog

Role & Responsibilities:
- Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML (a training-and-tracking sketch follows this posting).
- Build end-to-end ML pipelines (data ingestion → feature engineering → model training → deployment) on Databricks Lakehouse.
- Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking.
- Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs.
- Implement Delta Lake for scalable, ACID-compliant data workflows.
- Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions.
- Troubleshoot issues in Spark jobs and the Databricks environment.
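A minimal sketch of the train-and-track step described above: fit a Spark MLlib classifier and record the run with MLflow. The table, column names, and hyperparameter values are hypothetical, not from the posting:

```python
# Sketch: Spark MLlib churn model with MLflow experiment tracking.
import mlflow
import mlflow.spark
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("churn-sketch").getOrCreate()

df = spark.read.table("churn_features")  # hypothetical Delta table
assembler = VectorAssembler(
    inputCols=["tenure", "monthly_spend", "n_tickets"],  # hypothetical columns
    outputCol="features",
)
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

with mlflow.start_run(run_name="churn-lr"):
    lr = LogisticRegression(featuresCol="features", labelCol="churned",
                            regParam=0.01)
    model = lr.fit(train)

    auc = BinaryClassificationEvaluator(labelCol="churned") \
        .evaluate(model.transform(test))

    # Log hyperparameters, metrics, and the fitted model for later deployment.
    mlflow.log_param("regParam", 0.01)
    mlflow.log_metric("auc", auc)
    mlflow.spark.log_model(model, "model")
```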

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer
Location: Pune, India

About the Role: We’re looking for highly analytical, technically strong Artificial Intelligence/Machine Learning Engineers to help build scalable, data-driven systems in the digital marketing space. You'll work alongside a top-tier team on impactful solutions affecting billions of users globally.

Experience Required: 3 – 7 years

Key Responsibilities:
- Collaborate across Data Science, Ops, and Engineering to tackle large-scale ML challenges.
- Build and manage robust ML pipelines (ETL, training, deployment) in real-time environments.
- Optimize models and infrastructure for performance and scalability.
- Research and implement best practices in ML systems and lifecycle management.
- Deploy deep learning models using high-performance computing environments.
- Integrate ML frameworks into cloud/distributed systems.

Required Skills:
- 2+ years of Python development in a programming-intensive role.
- 1+ year of hands-on ML experience (e.g., Classification, Clustering, Optimization, Deep Learning).
- 2+ years working with distributed frameworks (Spark, Hadoop, Kubernetes).
- 2+ years with ML tools such as TensorFlow, PyTorch, Keras, MLlib.
- 2+ years of experience with cloud platforms (AWS, Azure, GCP).
- Excellent communication skills.

Preferred: Prior experience in AdTech or digital advertising platforms (DSP, Ad Exchange, SSP).

Education: M.Tech or Ph.D. in Computer Science, Software Engineering, Mathematics, or a related discipline.

Why Apply?
- Join a fast-moving team working at the forefront of AI in advertising.
- Build technologies that impact billions of users worldwide.
- Shape the future of programmatic and performance advertising.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Basic Qualifications:
- An MS in CS focused on Machine Learning, Statistics, Operational Research, or another highly quantitative field
- 5+ years of hands-on experience in big data, machine learning and predictive modeling
- 3+ years of people management and cross-department functional experience
- Knowledge of a statistical analysis package such as R or Tableau, and a high-level programming language (e.g., Python) used in the context of data analysis and statistical model building
- Strongly motivated by entrepreneurial projects and experienced in collaboratively working with a diverse team of engineers, analysts, and business management in achieving superior bottom-line results
- Strong communication and data presentation skills
- Strong ability in problem solving and driving for results

Amazon Ads is looking for a Research Science Manager with a machine learning and deep learning background to build industry-leading technology for preventing ad fraud at scale.

Key job responsibilities
Advertising at Amazon is a fast-growing multi-billion dollar business that spans desktop, mobile and connected devices; encompasses ads on Amazon and a vast network of hundreds of thousands of third-party publishers; and extends across the US, EU and an increasing number of international geographies. One of the key focus areas is Traffic Quality, where we endeavor to identify non-human and invalid traffic within programmatic ad sources and weed them out to ensure a high-quality advertising marketplace. We do this by building machine learning and optimization algorithms that operate at scale and leverage nuanced features about user, context, and creative engagement to determine the validity of traffic. The challenge is to stay one step ahead by investing in deep analytics and developing new algorithms that address emergent attack vectors in a structured and scalable fashion. We are committed to building a long-term traffic quality solution that encompasses all Amazon advertising channels and provides state-of-the-art traffic filtering that preserves advertiser trust and saves them hundreds of millions of dollars of wasted spend.

We are looking for a dynamic, innovative and accomplished research science manager to lead machine learning and data science for the Advertising Traffic Quality vertical. Are you excited by the prospect of analyzing terabytes of data and leveraging state-of-the-art data science and machine learning techniques to solve real-world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? As a research science manager for Traffic Quality, you will lead a team of applied scientists, research scientists, data scientists and engineers to conceptualize and build algorithms that efficiently detect and filter invalid traffic. You will be the single-threaded owner of the algorithms that go into our traffic quality systems and will be responsible for both near-term improvements to existing algorithms and the long-term direction for Traffic Quality algorithms. Your team will include experts in machine learning, statistics and analytics who work on state-of-the-art modeling techniques, as well as generating insights that fuel critical investments. You will also lead an engineering team that handles terabyte-scale data and implements features and algorithms that process billions of events per day. You will interface with product managers and operations teams to bring key advertising initiatives to customers. Your strong management skills will be utilized to help deliver critical projects that cut across organization structures and meet key business goals.

Major responsibilities:
- Deliver key goals to enhance advertiser experience and deliver multi-million dollar savings by building algorithms to detect and mitigate invalid traffic
- Use machine learning and statistical techniques to create new, scalable solutions for invalid traffic filtering
- Drive core business analytics and data science explorations to inform key business decisions and the algorithm roadmap
- Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model implementation
- Hire and develop top talent in machine learning and data science and accelerate the pace of innovation in the group
- Build a culture of innovation and long-term thinking, and showcase this via peer-reviewed publications and whitepapers
- Work with your engineering team and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
- Keep updated on the industry landscape in Traffic Quality and identify algorithm investments to achieve an industry-leading traffic quality solution
- Learn continuously about new developments in machine learning and AI, as well as recent innovations in creative intelligence and malware detection, and identify how these can be rolled into building an industry-leading solution for Amazon advertising

Preferred Qualifications:
- Technical leader with 10+ years of exceptional, hands-on experience in machine learning in e-commerce, fraud/risk assessment, or an enterprise software company building and providing analytics or risk-management services and software
- Ph.D. in Statistics, CS, Machine Learning, Operations Research or another highly quantitative field
- Knowledge of distributed computing and experience with advanced machine learning libraries like Spark MLlib, TensorFlow, MXNet, etc.
- Strong publication record in international conferences on machine learning and artificial intelligence

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Proficiency in AI tools used to prepare and automate data pipelines and ingestion:
- Apache Spark, especially with MLlib
- PySpark and Dask for distributed data processing
- Pandas and NumPy for local data wrangling
- Apache Airflow to schedule and orchestrate ETL/ELT jobs (a minimal DAG sketch follows this posting)
- Google Cloud (BigQuery, Vertex AI)
- Python (the most popular language for AI and data tasks)
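As a small illustration of the Airflow item above, a daily ETL/ELT job is expressed as a DAG of ordered tasks. The DAG id, task names, and schedule below are hypothetical, and the task bodies are placeholders:

```python
# Minimal, hypothetical Airflow DAG (Airflow 2.4+): three ordered ETL tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from object storage")       # placeholder logic

def transform():
    print("clean and aggregate with PySpark/Pandas")  # placeholder logic

def load():
    print("write results to BigQuery")                # placeholder logic

with DAG(
    dag_id="daily_etl",              # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # enforce run order
```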

Posted 2 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: Senior Data Scientist (Contract | Remote)
Location: Remote
Experience Required: 7 - 10 Years

About The Role: We are seeking a highly experienced Senior Data Scientist to join our team on a contract basis. This role is ideal for someone who excels in predictive analytics and has strong hands-on experience with Databricks and PySpark. You will play a key role in building and deploying scalable machine learning models, with a focus on regression, classification, and time-series forecasting.

Key Responsibilities:
- Design, build, and deploy predictive models using regression, classification, and time-series techniques.
- Develop and maintain scalable data pipelines using Databricks and PySpark.
- Leverage MLflow for experiment tracking and model versioning.
- Utilize Delta Lake for efficient data storage and version control (a Delta Lake sketch follows this posting).
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Implement and manage CI/CD pipelines for model deployment.
- Work with cloud platforms such as Azure or AWS to develop and deploy ML solutions.

Required Skills & Qualifications:
- Minimum 7 years of experience in predictive analytics and machine learning.
- Strong expertise in Databricks, PySpark, MLflow, and Delta Lake.
- Proficiency in Python, Spark MLlib, and AutoML frameworks.
- Experience working with CI/CD pipelines for model deployment.
- Familiarity with Azure or AWS cloud services.
- Excellent problem-solving skills and the ability to work independently.

Good to Have: Prior experience in the Life Insurance or Property & Casualty (P&C) insurance domain. (ref:hirist.tech)
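A minimal sketch of the Delta Lake pattern called out above: every write commits a new table version, and time travel lets a training run pin the exact data it consumed. The paths are hypothetical and a Delta-enabled Spark session (e.g., Databricks) is assumed:

```python
# Sketch: Delta Lake versioned writes and time travel (hypothetical paths).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

features = spark.read.parquet("/mnt/raw/policies/")  # hypothetical source

# Each write commits atomically and creates a new table version.
features.write.format("delta").mode("overwrite").save("/mnt/gold/features")

# Time travel: re-read the exact snapshot a past experiment used.
v0 = (spark.read.format("delta")
      .option("versionAsOf", 0)
      .load("/mnt/gold/features"))
print(v0.count())
```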

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer
Location: Pune, India

About the Role: We’re looking for highly analytical, technically strong Artificial Intelligence/Machine Learning Engineers to help build scalable, data-driven systems in the digital marketing space. You'll work alongside a top-tier team on impactful solutions affecting billions of users globally.

Experience Required: 3 – 7 years

Key Responsibilities:
- Collaborate across Data Science, Ops, and Engineering to tackle large-scale ML challenges.
- Build and manage robust ML pipelines (ETL, training, deployment) in real-time environments.
- Optimize models and infrastructure for performance and scalability.
- Research and implement best practices in ML systems and lifecycle management.
- Deploy deep learning models using high-performance computing environments.
- Integrate ML frameworks into cloud/distributed systems.

Required Skills:
- 2+ years of Python development in a programming-intensive role.
- 1+ year of hands-on ML experience (e.g., Classification, Clustering, Optimization, Deep Learning).
- 2+ years working with distributed frameworks (Spark, Hadoop, Kubernetes).
- 2+ years with ML tools such as TensorFlow, PyTorch, Keras, MLlib.
- 2+ years of experience with cloud platforms (AWS, Azure, GCP).
- Excellent communication skills.

Preferred: Prior experience in AdTech or digital advertising platforms (DSP, Ad Exchange, SSP).

Education: M.Tech or Ph.D. in Computer Science, Software Engineering, Mathematics, or a related discipline.

Why Apply?
- Join a fast-moving team working at the forefront of AI in advertising.
- Build technologies that impact billions of users worldwide.
- Shape the future of programmatic and performance advertising.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Are you passionate about solving complex logistics challenges? Our Analytics team is at the forefront of enhancing delivery experiences through data-driven solutions and innovative technology. As a Data Scientist, you will join a team dedicated to optimizing our delivery network, ensuring reliable and efficient service to our customers. We are seeking an enthusiastic, customer-centric professional with strong analytical capabilities to drive impactful projects, implement advanced solutions, and develop scalable processes. In this role, you will have immediate ownership of business-critical challenges and the opportunity to make strategic, data-driven decisions that shape the future of our delivery operations. Your work will directly influence customer experience and operational excellence. The ideal candidate will possess both research science capabilities and program management skills, thriving in an environment that requires independent decision-making and comfort with ambiguity. This role offers the opportunity to make a significant impact on our advanced logistics network while working with pioneering technology and data science applications.

Basic qualifications
- 6+ years of experience building machine learning models for business applications
- Knowledge of programming languages such as C/C++, Python, Java or Perl
- Experience programming in Java, C++, Python or a related language
- Experience with neural deep learning methods and machine learning

Preferred qualifications
- PhD in engineering, technology, computer science, machine learning, robotics, operations research, statistics, mathematics or an equivalent quantitative field
- 7+ years of extensive relevant research experience
- Deep expertise in machine learning
- Proficiency in programming
- Core competency in mathematics and statistics
- Track record of successful projects in algorithm design and product development
- Publications at peer-reviewed conferences or journals
- Prior experience with mentorship and/or management of scientists
- Strategic thinker with good execution skills
- Exhibits excellent business judgment
- Effective verbal and written communication skills
- Experience working with real-world data sets and building scalable models from big data
- Experience with modern modeling tools and frameworks such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow
- Experience with large-scale distributed systems

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

6.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description
We are looking for a driven Scala Developer to join our dynamic team at GlobalLogic. In this role, you will have the opportunity to work on projects that shape the future of technology. You will collaborate with our world-class engineers to deliver reliable solutions in a collaborative, innovative environment.

Requirements
Minimum 6-12 years of software development experience

Scala Language Mastery
Strong understanding of both functional and object-oriented programming paradigms. Deep knowledge of: immutability, lazy evaluation; traits, case classes, companion objects; pattern matching; advanced type system: generics, type bounds, implicits, context bounds (a short sketch of these features follows this posting).

🔹 Functional Programming (FP)
Hands-on experience with pure functions, monads, functors, and higher-kinded types. FP libraries: Cats, Scalaz, or ZIO. Understanding of effect systems and referential transparency.

📦 Frameworks & Libraries
🔹 Backend / API Development
RESTful API development using Play Framework or Akka HTTP. Experience with GraphQL is a plus.

🔹 Concurrency & Asynchronous Programming
Deep understanding of Futures and Promises; Akka actors and Akka Streams; ZIO or Cats Effect.

🛠️ Build, Tooling & DevOps
SBT for project building and dependency management. Familiarity with Git, Docker, and Kubernetes. CI/CD experience with Jenkins, GitHub Actions, or similar tools. Comfortable with the Linux command line and shell scripting.

🗄️ Database & Data Systems
Strong experience with SQL databases (PostgreSQL, MySQL), NoSQL databases (Cassandra, MongoDB), streaming/data pipelines (Kafka, Spark with Scala), and ORM/FP database libraries (Slick, Doobie).

🧱 Architecture & System Design
Microservices architecture design and deployment. Event-driven architecture. Familiarity with Domain-Driven Design (DDD). Designing for scalability, fault tolerance, and observability.

🧪 Testing & Quality
Experience with testing libraries: ScalaTest, Specs2, MUnit. ScalaCheck for property-based testing. Test-driven development (TDD) and behavior-driven development (BDD).

🌐 Cloud & Infrastructure (Desirable)
Deploying Scala apps on AWS (e.g., EC2, Lambda, ECS, RDS), GCP, or Azure. Experience with infrastructure-as-code (Terraform, CloudFormation) is a plus.

🧠 Soft Skills & Leadership
Mentorship: ability to coach junior developers. Code reviews: ensure code quality and consistency. Communication: work cross-functionally with product managers, DevOps, and QA. Agile development: experience with Scrum/Kanban. Ownership: capable of taking features from design to production.

⚡ Optional (but Valuable)
Scala.js / Scala Native experience. Machine learning with Scala (e.g., Spark MLlib). Exposure to Kotlin, Java, or Python.

Job responsibilities
As a Scala Developer / Big Data Engineer, you will:
– Develop, test, and deploy high-quality Scala applications.
– Implement functional and object-oriented programming paradigms.
– Ensure code quality through immutability, lazy evaluation, and pattern matching.
– Design and build scalable systems using traits, case classes, and companion objects.
– Collaborate with cross-functional teams to determine project requirements and deliver solutions successfully.
– Troubleshoot and resolve complex technical issues.
– Participate in code reviews to maintain our high standards of quality.

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
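As a concrete illustration of the language features this posting emphasizes (traits, case classes, companion objects, exhaustive pattern matching), here is a small self-contained sketch; the Shape hierarchy is invented for the example.

```scala
// Traits, case classes, companion objects, and pattern matching in one place.
sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Rectangle(width: Double, height: Double) extends Shape

object Shape {
  // Companion object acting as a small factory.
  def unit: Shape = Circle(1.0)

  // Exhaustive pattern match: because the trait is sealed, the compiler
  // warns if a case is missing.
  def area(s: Shape): Double = s match {
    case Circle(r)       => math.Pi * r * r
    case Rectangle(w, h) => w * h
  }
}

object ShapeDemo extends App {
  println(Shape.area(Circle(2.0)))        // 12.566...
  println(Shape.area(Rectangle(3, 4)))    // 12.0
}
```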

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Senior Software Engineer
Department: IDP

About Us
HG Insights is the global leader in technology intelligence, delivering actionable AI-driven insights through advanced data science and scalable big data solutions. Our Big Data Insights Platform processes billions of unstructured documents and powers a vast data lake, enabling enterprises to make strategic, data-driven decisions. Join our team to solve complex data challenges at scale and shape the future of B2B intelligence.

What You'll Do
Design, build, and optimize large-scale distributed data pipelines for processing billions of unstructured documents using Databricks, Apache Spark, and cloud-native big data tools (a pipeline sketch in Scala follows this posting). Architect and scale enterprise-grade big data systems, including data lakes, ETL/ELT workflows, and syndication platforms for customer-facing Insights-as-a-Service (InaaS) products. Collaborate with product teams to develop features across databases, backend services, and frontend UIs that expose actionable intelligence from complex datasets. Implement cutting-edge solutions for data ingestion, transformation, and analytics using Hadoop/Spark ecosystems, Elasticsearch, and cloud services (AWS EC2, S3, EMR). Drive system reliability through automation, CI/CD pipelines (Docker, Kubernetes, Terraform), and infrastructure-as-code practices.

What You'll Be Responsible For
Leading the development of our Big Data Insights Platform, ensuring scalability, performance, and cost-efficiency across distributed systems. Mentoring engineers, conducting code reviews, and establishing best practices for Spark optimization, data modeling, and cluster resource management. Building and troubleshooting complex data pipelines, including performance tuning of Spark jobs, query optimization, and data quality enforcement. Collaborating in agile workflows (daily stand-ups, sprint planning) to deliver features rapidly while maintaining system stability. Ensuring security and compliance across data workflows, including access controls, encryption, and governance policies.

What You'll Need
BS/MS/Ph.D. in Computer Science or a related field, with 5+ years of experience building production-grade big data systems. Expertise in Scala/Java for Spark development, including optimization of batch/streaming jobs and debugging distributed workflows. Proven track record with: Databricks, Hadoop/Spark ecosystems, and SQL/NoSQL databases (MySQL, Elasticsearch); cloud platforms (AWS EC2, S3, EMR) and infrastructure-as-code tools (Terraform, Kubernetes); RESTful APIs, microservices architectures, and CI/CD automation. Leadership experience as a technical lead, including mentoring engineers and driving architectural decisions. Strong understanding of agile practices, distributed computing principles, and data lake architectures. Airflow orchestration (DAGs, operators, sensors) and integration with Spark/Databricks. 7+ years of designing, modeling, and building big data pipelines in an enterprise work setting.

Nice-to-Haves
Experience with machine learning pipelines (Spark MLlib, Databricks ML) for predictive analytics. Knowledge of data governance frameworks and compliance standards (GDPR, CCPA). Contributions to open-source big data projects or published technical blogs/papers. DevOps proficiency in monitoring tools (Prometheus, Grafana) and serverless architectures.
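The pipeline sketch referenced above: a minimal Spark batch job in Scala that cleans raw documents and writes partitioned Parquet. The S3 paths and column names are hypothetical placeholders, not details from the posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Hypothetical paths and column names; a real pipeline would be config-driven.
object DocumentPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("doc-pipeline").getOrCreate()

    val raw = spark.read.json("s3://example-bucket/raw-docs/") // assumed layout

    // Basic cleansing: drop rows without an id, normalize a text column,
    // and derive an ingestion date to partition by for cheaper downstream scans.
    val cleaned = raw
      .filter(F.col("doc_id").isNotNull)
      .withColumn("text", F.lower(F.trim(F.col("text"))))
      .withColumn("ingest_date", F.to_date(F.col("ingested_at")))

    cleaned.write
      .mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("s3://example-bucket/clean-docs/")

    spark.stop()
  }
}
```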

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description
WBA

Requirements
Scala (coding on Spark, core Scala language; concepts such as traits, extends, immutability, mutability, objects, classes, etc.), Spark (SQL, DataFrames, Datasets, RDDs; a typed Dataset sketch follows this posting), Databricks (notebooks, workflows, clusters, monitoring, debugging), IntelliJ

Minimum 8-10 years of software development experience

Scala Language Mastery
Strong understanding of both functional and object-oriented programming paradigms. Deep knowledge of: immutability, lazy evaluation; traits, case classes, companion objects; pattern matching; advanced type system: generics, type bounds, implicits, context bounds.

🔹 Functional Programming (FP)
Hands-on experience with pure functions, monads, functors, and higher-kinded types. FP libraries: Cats, Scalaz, or ZIO. Understanding of effect systems and referential transparency.

📦 Frameworks & Libraries
🔹 Backend / API Development
RESTful API development using Play Framework or Akka HTTP. Experience with GraphQL is a plus.

🔹 Concurrency & Asynchronous Programming
Deep understanding of Futures and Promises; Akka actors and Akka Streams; ZIO or Cats Effect.

🛠️ Build, Tooling & DevOps
SBT for project building and dependency management. Familiarity with Git, Docker, and Kubernetes. CI/CD experience with Jenkins, GitHub Actions, or similar tools. Comfortable with the Linux command line and shell scripting.

🗄️ Database & Data Systems
Strong experience with SQL databases (PostgreSQL, MySQL), NoSQL databases (Cassandra, MongoDB), streaming/data pipelines (Kafka, Spark with Scala), and ORM/FP database libraries (Slick, Doobie).

🧱 Architecture & System Design
Microservices architecture design and deployment. Event-driven architecture. Familiarity with Domain-Driven Design (DDD). Designing for scalability, fault tolerance, and observability.

🧪 Testing & Quality
Experience with testing libraries: ScalaTest, Specs2, MUnit. ScalaCheck for property-based testing. Test-driven development (TDD) and behavior-driven development (BDD).

🌐 Cloud & Infrastructure (Desirable)
Deploying Scala apps on AWS (e.g., EC2, Lambda, ECS, RDS), GCP, or Azure. Experience with infrastructure-as-code (Terraform, CloudFormation) is a plus.

🧠 Soft Skills & Leadership
Mentorship: ability to coach junior developers. Code reviews: ensure code quality and consistency. Communication: work cross-functionally with product managers, DevOps, and QA. Agile development: experience with Scrum/Kanban. Ownership: capable of taking features from design to production.

⚡ Optional (but Valuable)
Scala.js / Scala Native experience. Machine learning with Scala (e.g., Spark MLlib). Exposure to Kotlin, Java, or Python.

Job responsibilities
Same as above.

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
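The typed Dataset sketch referenced above, contrasting Datasets with untyped DataFrames; the Order case class and values are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession

// A typed Dataset view of the same data a DataFrame holds untyped.
final case class Order(orderId: Long, customerId: Long, amount: Double)

object DatasetDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataset-demo")
      .master("local[*]") // local run for illustration only
      .getOrCreate()
    import spark.implicits._

    val orders = Seq(
      Order(1L, 10L, 25.0),
      Order(2L, 10L, 75.0),
      Order(3L, 11L, 40.0)
    ).toDS() // Dataset[Order]: field access is checked at compile time

    // Typed filter and untyped aggregation side by side.
    val big = orders.filter(_.amount > 30.0)
    big.groupBy($"customerId").sum("amount").show()

    spark.stop()
  }
}
```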

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Company Description
Technocratic Solutions is a trusted and renowned provider of technical resources on a contract basis, serving businesses globally. With a dedicated team of developers, we deliver top-notch software solutions in cutting-edge technologies such as PHP, Java, JavaScript, Drupal, QA, Blockchain, AI, and more. Our mission is to empower businesses worldwide by offering high-quality technical resources that meet project requirements and objectives. We prioritize exceptional customer service and satisfaction, delivering our services quickly, efficiently, and cost-effectively. Join us and experience the difference of working with a reliable partner driven by excellence and focused on your success.

Job Title: AI/ML Engineer – Generative AI, Databricks, R Programming
Location: Delhi NCR / Pune
Experience Level: 5 years

Job Summary
We are seeking a highly skilled and motivated AI/ML Engineer with hands-on experience in Generative AI, Databricks, and R programming to join our advanced analytics team. The ideal candidate will be responsible for designing, building, and deploying intelligent solutions that drive innovation, automation, and insight generation using modern AI/ML technologies.

Key Responsibilities
Develop and deploy scalable ML and Generative AI models using Databricks (Spark-based architecture). Build pipelines for data ingestion, transformation, and model training/inference on Databricks (a minimal training sketch follows this posting). Implement and fine-tune Generative AI models (e.g., LLMs, diffusion models) for various use cases like content generation, summarization, and simulation. Leverage R for advanced statistical modeling, data visualization, and integration with ML pipelines. Collaborate with data scientists, data engineers, and product teams to translate business needs into technical solutions. Ensure reproducibility, performance, and governance of AI/ML models. Stay updated with the latest trends and technologies in AI/ML and GenAI and apply them where applicable.

Required Skills & Qualifications
Bachelor's/Master's degree in Computer Science, Data Science, Statistics, or a related field. 5 years of hands-on experience in Machine Learning/AI, with at least 2 years in Generative AI. Proficiency in Databricks, including Spark MLlib, Delta Lake, and MLflow. Strong command of R programming, especially for statistical modeling and data visualization (ggplot2, dplyr, caret, etc.). Experience with LLMs, transformers (HuggingFace, LangChain, etc.), and other GenAI frameworks. Familiarity with Python, SQL, and cloud platforms (AWS/Azure/GCP) is a plus. Excellent problem-solving, communication, and collaboration skills.

Preferred
Certifications in Databricks, ML/AI (e.g., Azure/AWS ML), or R. Experience in regulated industries (finance, healthcare, etc.). Exposure to MLOps, CI/CD for ML, and version control (Git).

What We Offer
Competitive salary and benefits. Flexible work environment. Opportunities for growth and learning in cutting-edge AI/ML. Collaborative and innovative team culture.
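A minimal training sketch for the Databricks stack mentioned above, in Scala: reading features from a Delta table and fitting a Spark MLlib pipeline. The mount path, column names, and churn use case are assumptions for illustration; in practice MLflow would typically track and log the resulting model.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

// Assumes a Databricks-style runtime where the Delta format is available.
object DeltaTraining {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-training").getOrCreate()

    // Hypothetical Delta table of per-customer features.
    val features = spark.read.format("delta").load("/mnt/lake/features")

    // Two-stage pipeline: assemble feature vector, then classify.
    val pipeline = new Pipeline().setStages(Array(
      new VectorAssembler()
        .setInputCols(Array("usage", "tenure")) // assumed columns
        .setOutputCol("features"),
      new LogisticRegression().setLabelCol("churned") // assumed label column
    ))

    val model = pipeline.fit(features)
    // MLflow would usually log this; a plain save keeps the sketch minimal.
    model.write.overwrite().save("/mnt/lake/models/churn")

    spark.stop()
  }
}
```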

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Data Science and AI Developer

Job Description:
We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

Key Responsibilities:
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection (a clustering sketch follows this posting).
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

Requirements:
1. Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab

HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
Python Django Expertise: In-depth knowledge of Python Django, including e-commerce functionality.
Theming: Proven experience in designing and implementing custom themes for Python websites.
Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
Problem Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
Interns must know how to connect a front end to data science services, and vice versa.

Benefits:
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
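The clustering sketch referenced above. Although this posting is largely Python-centric, Spark MLlib is among the listed frameworks, so the sketch uses Scala for consistency with the rest of this page; the toy points and k = 2 are invented for illustration.

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ClusteringSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clustering")
      .master("local[*]") // local run for illustration only
      .getOrCreate()
    import spark.implicits._

    // Two obvious groups of toy 2-D points.
    val points = Seq((1.0, 1.1), (0.9, 1.0), (8.0, 8.2), (7.9, 8.1)).toDF("x", "y")
    val assembled = new VectorAssembler()
      .setInputCols(Array("x", "y"))
      .setOutputCol("features")
      .transform(points)

    // k = 2 clusters; a real job would tune k, e.g. via silhouette scores.
    val model = new KMeans().setK(2).setSeed(42L).fit(assembled)
    model.transform(assembled).select("x", "y", "prediction").show()

    spark.stop()
  }
}
```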

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Roles & Responsibilities
Job Description: Data Scientist

Expertise in Data Science & AI/ML: 3-8 years of experience designing, developing, and deploying scalable AI/ML solutions for Big Data, with proficiency in Python, SQL, TensorFlow, PyTorch, scikit-learn, and Big Data ML libraries (e.g., Spark MLlib); a model-evaluation sketch follows this posting.
Cloud Proficiency: Proven experience with cloud-based Big Data services (GCP preferred, AWS/Azure a plus) for AI/ML model deployment and Big Data pipelines; understanding of data modeling, warehousing, and ETL in Big Data contexts.
Analytical & Communication Skills: Ability to extract actionable insights from large datasets, apply statistical methods, and effectively communicate complex findings to both technical and non-technical audiences (visualization skills a plus).
Educational Background: Bachelor's or Master's degree in a quantitative field (Computer Science, Data Science, Engineering, Statistics, Mathematics).

Experience: 3-4.5 years

Skills
Primary Skill: Data Science
Sub Skill(s): Data Science
Additional Skill(s): Python, Data Science, SQL, TensorFlow, PyTorch

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
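The model-evaluation sketch referenced above: a held-out split and an AUC metric with Spark MLlib in Scala. The toy rows are far too small for a meaningful split and exist only to show the API shape.

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object EvaluationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("eval")
      .master("local[*]") // local run for illustration only
      .getOrCreate()
    import spark.implicits._

    // Invented toy rows: binary label plus two numeric features.
    val df = new VectorAssembler()
      .setInputCols(Array("f1", "f2"))
      .setOutputCol("features")
      .transform(Seq(
        (0.0, 1.0, 0.2), (1.0, 3.0, 2.1), (0.0, 0.7, 0.3),
        (1.0, 2.8, 1.9), (0.0, 1.1, 0.4), (1.0, 3.2, 2.4)
      ).toDF("label", "f1", "f2"))

    // Hold out ~20% of rows so the metric reflects unseen data.
    val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 7L)
    val model = new LogisticRegression().fit(train)

    val auc = new BinaryClassificationEvaluator()
      .setMetricName("areaUnderROC")
      .evaluate(model.transform(test))
    println(s"AUC = $auc")

    spark.stop()
  }
}
```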

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Spark Cluster Deployment: Deploy, configure, and maintain Apache Spark clusters on Kubernetes, ensuring scalability, reliability, and performance (a driver-configuration sketch follows this posting).
Application Deployment: Collaborate with data engineers and data scientists to deploy Spark applications and workloads, ensuring they run efficiently.
Monitoring and Optimization: Implement monitoring solutions to track cluster performance, resource utilization, and application health. Proactively identify and resolve performance bottlenecks.
Resource Management: Manage cluster resources, including CPU, memory, and storage allocation, to ensure optimal utilization and cost efficiency.
Security: Implement and maintain security measures, including authentication, authorization, and encryption, to protect sensitive data and Spark clusters.
Backup and Recovery: Develop and maintain backup and recovery strategies to ensure data integrity and availability in case of failures.
Documentation: Maintain clear and comprehensive documentation of Spark cluster configurations, deployment procedures, and best practices.
Troubleshooting: Quickly diagnose and resolve issues related to Spark clusters, applications, and Kubernetes infrastructure.
Collaboration: Work closely with cross-functional teams, including data engineers, data scientists, and DevOps, to understand application requirements and optimize Spark clusters accordingly.

Requirements
Proven experience deploying and managing Apache Spark on Kubernetes in a production environment.
Proficiency in containerization technologies, particularly Docker and Kubernetes.
Strong knowledge of Spark architecture, including cluster, driver, and worker nodes.
Familiarity with Spark tuning, optimization, and performance monitoring.
Experience with resource management tools like Kubernetes Resource Quotas and LimitRanges.
Understanding of data processing and analytics workflows.
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills.
Experience with Spark cluster orchestration tools like Helm.
Knowledge of Spark ecosystem components such as Spark SQL, Spark Streaming, and MLlib.
Familiarity with cloud-based solutions (Azure).
Certification in Kubernetes (e.g., Certified Kubernetes Administrator - CKA).
Knowledge of CI/CD pipelines and infrastructure-as-code (IaC) tools (e.g., Terraform).
Scripting skills in languages like Python, Bash, or Shell.
Understanding of DevOps practices and automation.
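The driver-configuration sketch referenced above: what a Spark-on-Kubernetes session can look like from the application side. The API server URL, container image, and namespace are placeholders; in practice these settings are usually supplied via spark-submit rather than hard-coded.

```scala
import org.apache.spark.sql.SparkSession

// A sketch of driver-side configuration for Spark on Kubernetes.
object K8sJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("k8s-etl")
      // Placeholder Kubernetes API server; the k8s:// prefix selects the
      // Kubernetes cluster manager.
      .master("k8s://https://kubernetes.example.com:6443")
      .config("spark.kubernetes.container.image", "registry.example.com/spark:3.5.0")
      .config("spark.kubernetes.namespace", "data-jobs")
      .config("spark.executor.instances", "4")
      .config("spark.executor.memory", "4g")
      .getOrCreate()

    // Trivial distributed computation to verify executors come up.
    spark.range(1000000L).selectExpr("sum(id)").show()
    spark.stop()
  }
}
```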

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Data scientist with a strong background in data mining, machine learning, recommendation systems, and statistics. Should possess the signature strengths of a qualified mathematician, with the ability to apply concepts of mathematics and applied statistics, with specialization in one or more of NLP, computer vision, speech, or data mining, to develop models that provide effective solutions. A strong data engineering background with hands-on coding capabilities is needed to own and deliver outcomes. A Master's or PhD degree in a highly quantitative field (Computer Science, Machine Learning, Operational Research, Statistics, Mathematics, etc.) or equivalent experience, and 5+ years of industry experience in predictive modelling, data science, and analysis, with prior experience in an ML or data scientist role and a track record of building ML or DL models.

Responsibilities and skills
Work with our customers to deliver an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to organizations.
Selecting features, building and optimizing classifiers using ML techniques (a text-classification sketch follows this posting).
Data mining using state-of-the-art methods, creating text mining pipelines to clean and process large unstructured datasets to reveal high-quality information and hidden insights using machine learning techniques.
Should be able to appreciate and work on one or more of the following:
Computer vision problems; for example, extracting rich information from images to categorize and process visual data, developing machine learning algorithms for object and image classification, and experience in using DBSCAN, PCA, Random Forests, and multinomial logistic regression to select the best features to classify objects.
OR
Deep understanding of NLP, such as fundamentals of information retrieval, deep learning approaches, transformers, attention models, text summarisation, attribute extraction, etc. Preferably experience in one or more of the following areas: recommender systems, moderation of user-generated content, sentiment analysis, etc.
OR
Speech recognition, speech-to-text and vice versa, understanding of NLP and IR, text summarisation, and statistical and deep learning approaches to text processing, with experience of having worked in these areas.
Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
Appreciation for deep learning frameworks like MXNet, Caffe2, Keras, TensorFlow.
Experience in working with GPUs to develop models and in handling terabyte-size datasets.
Experience with common data science toolkits such as R, Weka, NumPy, MATLAB, mlr, MLlib, scikit-learn, caret, etc.; excellence in at least one of these is highly desirable.
Should be able to work hands-on in Python, R, etc. Should closely collaborate and work with engineering teams to iteratively analyse data using Scala, Spark, Hadoop, Kafka, Storm, etc.
Experience with NoSQL databases and familiarity with data visualization tools will be of great advantage.

What will you experience in terms of culture at Sahaj?
A culture of trust, respect and transparency
Opportunity to collaborate with some of the finest minds in the industry
Work across multiple domains

What are the benefits of being at Sahaj?
Unlimited leaves
Life insurance & private health insurance
Stock options
No hierarchy
Open Salaries
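The text-classification sketch referenced above: a classic Spark MLlib pipeline (tokenize, hash to term frequencies, logistic regression) of the kind used in moderation or sentiment tasks. The four documents are invented for illustration.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object TextClassifier {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("text-clf")
      .master("local[*]") // local run for illustration only
      .getOrCreate()
    import spark.implicits._

    // Toy labelled documents: 1.0 = positive, 0.0 = negative.
    val docs = Seq(
      (1.0, "great product fast delivery"),
      (0.0, "terrible support never again"),
      (1.0, "love it works perfectly"),
      (0.0, "broken on arrival waste of money")
    ).toDF("label", "text")

    // Tokenize, hash tokens into a fixed-size term-frequency vector, classify.
    val pipeline = new Pipeline().setStages(Array(
      new Tokenizer().setInputCol("text").setOutputCol("words"),
      new HashingTF().setInputCol("words").setOutputCol("features").setNumFeatures(1 << 12),
      new LogisticRegression()
    ))

    val model = pipeline.fit(docs)
    model.transform(docs).select("text", "prediction").show(false)

    spark.stop()
  }
}
```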

Posted 3 weeks ago

Apply
page 1 of 2 results

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
