0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
Roku is the No. 1 TV streaming platform in the U.S., Canada, and Mexico, with 70+ million active accounts. Roku pioneered streaming to the TV and continues to innovate and lead the industry. We believe Roku's continued success relies on its investment in our machine learning (ML) recommendation engine. Roku enables our users to access millions of titles - movies, episodes, news, sports, music, and channels from all around the world.

About the role
We're on a mission to build cutting-edge advertising technology that empowers businesses to run sustainable and highly profitable campaigns. The Ad Performance team owns server technologies, data, and cloud services aimed at improving the ad experience. We're looking for seasoned engineers with a background in machine learning to aid in this mission. Example problems include improving ad relevance, inferring demographics, and yield optimisation, among many others. Employees in this role are expected to apply knowledge of experimental methodologies, statistics, optimisation, probability theory, and machine learning using both general-purpose software and statistical languages.

What you'll be doing
- ML infrastructure: Help build a first-class machine learning platform from the ground up that manages the entire model lifecycle - feature engineering, model training, versioning, deployment, online serving/evaluation, and monitoring prediction quality.
- Data analysis and feature engineering: Apply your expertise to identify and generate features that can be leveraged by multiple use cases and models.
- Model training with batch and real-time prediction scenarios: Use machine learning and statistical modelling techniques such as decision trees, logistic regression, neural networks, and Bayesian analysis to develop and evaluate algorithms for improving product/system performance, quality, and accuracy.
- Production operations: Low-level systems debugging, performance measurement, and optimisation on large production clusters.
- Collaboration with cross-functional teams: Partner with product managers, data scientists, and other engineers to deliver impactful solutions.
- Staying ahead of the curve: Continuously learn and adapt to emerging technologies and industry trends.

We're excited if you have
- A Bachelor's, Master's, or PhD in Computer Science, Statistics, or a related field.
- Experience in applied machine learning on real use cases (bonus points for ad tech-related use cases).
- Great coding skills and strong software development experience (we use Spark, Python, Java).
- Familiarity with real-time evaluation of models with low latency constraints.
- Familiarity with distributed ML frameworks such as Spark MLlib, TensorFlow, etc.
- Ability to work with large-scale computing frameworks, data analysis systems, and modelling environments. Examples include Spark, Hive, and NoSQL stores such as Aerospike and ScyllaDB.
- An ad tech background is a plus.

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
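For a flavor of the Spark MLlib work the role above describes (e.g., improving ad relevance), here is a minimal PySpark sketch of a click-prediction pipeline. The input path and column names (device_type, age_bucket, clicked) are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: a click-prediction model with Spark MLlib.
# Input path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("ad-relevance-sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/ad_events/")  # hypothetical source

# Encode a categorical feature, assemble a feature vector, fit a classifier.
indexer = StringIndexer(inputCol="device_type", outputCol="device_idx")
encoder = OneHotEncoder(inputCols=["device_idx"], outputCols=["device_vec"])
assembler = VectorAssembler(inputCols=["age_bucket", "device_vec"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="clicked")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[indexer, encoder, assembler, lr]).fit(train)

# Evaluate area under the ROC curve on the held-out split.
auc = BinaryClassificationEvaluator(labelCol="clicked").evaluate(model.transform(test))
print(f"Test AUC: {auc:.3f}")
```

In a production setting like the one described, the fitted Pipeline would be versioned and served online; here it simply prints an offline evaluation metric.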
Posted 2 days ago
5.0 - 10.0 years
11 - 15 Lacs
Chennai
Work from Office
Project description
You'll be working in the GM Business Analytics team located in Pune. The successful candidate will be a member of the global Distribution team, which has team members in London and Pune. We work as part of a global team providing analytical solutions for IB distribution/sales people. Solutions deployed should be extensible globally with minimal localization.

Responsibilities
Are you passionate about data and analytics? Are you keen to be part of the journey to modernize a data warehouse/analytics suite of applications? Do you take pride in the quality of software delivered in each development iteration? We're looking for someone like that to join us and be a part of a high-performing team on a high-profile project. You will:
- solve challenging problems in an elegant way
- master state-of-the-art technologies
- build a highly responsive and fast-updating application in an Agile & Lean environment
- apply best development practices and effectively utilize technologies
- work across the full delivery cycle to ensure high-quality delivery
- write high-quality code and adhere to coding standards
- work collaboratively with diverse teams of technologists

You are
- Curious and collaborative, comfortable working independently as well as in a team
- Focused on delivery to the business
- Strong in analytical skills. For example, the candidate must understand the key dependencies among existing systems in terms of the flow of data among them. It is essential that the candidate learns to understand the 'big picture' of how the IB industry/business functions.
- Able to quickly absorb new terminology and business requirements
- Already strong in analytical tools, technologies, platforms, etc., and able to demonstrate a strong desire for learning and self-improvement
- Open to learning home-grown technologies, supporting current-state infrastructure, and helping drive future-state migrations
- Imaginative and creative with newer technologies
- Able to accurately and pragmatically estimate the development effort required for specific objectives

You will have the opportunity to work under minimal supervision to understand local and global system requirements, and to design and implement the required functionality, bug fixes, and enhancements. You will be responsible for components that are developed across the whole team and deployed globally. You will also have the opportunity to provide third-line support to the application's global user community, which will include assisting dedicated support staff and liaising directly with members of other development teams, some local and some remote.

Skills
Must have
- A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering
- 5+ years of relevant experience as a data engineer in Big Data
- Strong knowledge of programming languages (Python/Scala) and Big Data technologies (Spark, Databricks, or equivalent)
- Strong experience in executing complex data analysis and running complex SQL/Spark queries
- Strong experience in building complex data transformations in SQL/Spark
- Strong knowledge of database technologies
- Strong knowledge of Azure Cloud is advantageous
- Good understanding of, and experience with, Agile methodologies and delivery
- Strong communication skills, with the ability to build partnerships with stakeholders
- Strong analytical, data management, and problem-solving skills
Nice to have
- Experience working with the QlikView tool
- Understanding of QlikView scripting and data models
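For a sense of what "building complex data transformations in SQL/Spark" can mean in practice, here is a minimal PySpark sketch that deduplicates records to the latest version per key using a window function. The table and column names are hypothetical, not from the posting.

```python
# Minimal sketch: keep only the latest row per trade_id using a window.
# Source and target table names are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup-sketch").getOrCreate()
trades = spark.read.table("raw.trades")  # hypothetical source table

# Rank rows within each trade_id by recency, then keep rank 1.
w = Window.partitionBy("trade_id").orderBy(F.col("updated_at").desc())
latest = (
    trades
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
latest.write.mode("overwrite").saveAsTable("curated.trades_latest")
```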
Posted 2 days ago
5.0 - 10.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Lead Software Engineer - Backend
We're seeking a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, environmental, and transactional signals to power people-based marketing
- Ingesting vast amounts of identity and event data from our customers and partners
- Facilitating data transfers across systems
- Ensuring the integrity and health of our datasets
- And much more

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities: As a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in an on-call rotation in their respective time zones (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive
- Experience with web frameworks such as Flask or Django
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases such as Vertica, Snowflake, HBase, Scylla, or Couchbase
- Experience with real-time streaming frameworks such as Flink or Storm
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake
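To illustrate the scheduler experience the posting asks for, here is a minimal Airflow sketch of a daily batch pipeline. The DAG id, task bodies, and schedule are hypothetical assumptions for illustration.

```python
# Minimal Airflow sketch: a two-step daily batch pipeline.
# DAG id, callables, and schedule are hypothetical.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events(**context):
    ...  # e.g., pull a day of identity/event data from an upstream source

def load_to_warehouse(**context):
    ...  # e.g., write curated output to the warehouse

with DAG(
    dag_id="identity_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # load runs only after a successful extract
```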
Posted 2 days ago
4.0 - 6.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Role Description: As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience in cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform, and for troubleshooting any issues that arise.

Roles & Responsibilities:
- Designing, developing, and maintaining large-scale big data platforms using technologies like Hadoop, Spark, and Kafka
- Creating and managing data warehouses, data lakes, and data marts
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Troubleshooting and resolving big data platform issues
- Collaborating with other teams to ensure the consistency and integrity of data

Technical Skills:
- Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka
- Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams
- Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage
- Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala
- Excellent communication skills, with the ability to explain complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive, collaborative, and productive work environment, taking responsibility for the whole team

Qualifications
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
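As one concrete instance of the Kafka-plus-Spark pipeline work this role describes, here is a minimal Structured Streaming sketch that lands a Kafka topic in a data lake. Broker address, topic, and paths are hypothetical, and the job assumes the Spark-Kafka connector package is available.

```python
# Minimal sketch: ingest a Kafka topic into a data lake with
# Spark Structured Streaming. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
    # Kafka delivers bytes; cast key/value to strings for downstream parsing.
    .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/datalake/raw/clickstream")
    .option("checkpointLocation", "/datalake/_checkpoints/clickstream")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```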
Posted 2 days ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience in cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform, and for troubleshooting any issues that arise.

Roles & Responsibilities:
- Designing, developing, and maintaining large-scale big data platforms using technologies like Hadoop, Spark, and Kafka
- Creating and managing data warehouses, data lakes, and data marts
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Troubleshooting and resolving big data platform issues
- Collaborating with other teams to ensure the consistency and integrity of data

Technical Skills:
- Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka
- Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams
- Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage
- Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala
- Excellent communication skills, with the ability to explain complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive, collaborative, and productive work environment, taking responsibility for the whole team

Qualifications
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 2 days ago
8.0 - 13.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in an on-call rotation in their respective time zones (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience/fluency in Python
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive; experience working with container-based solutions is a plus
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the Advertising Attribution domain is a plus
- Experience with agile software development processes
- Excellent interpersonal and communication skills
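For the "Spark and Hive" side of this role, here is a minimal PySpark batch sketch: a daily rollup over a Hive table. Database, table, and column names are hypothetical.

```python
# Minimal sketch: daily campaign rollup over a Hive table.
# Database/table/column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("daily-rollup-sketch")
    .enableHiveSupport()  # read/write Hive metastore tables
    .getOrCreate()
)

daily = (
    spark.table("events.impressions")
    .groupBy("campaign_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.count("*").alias("impressions"),
        F.countDistinct("user_id").alias("unique_users"),
    )
)
daily.write.mode("overwrite").saveAsTable("marts.campaign_daily")
```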
Posted 2 days ago
7.0 - 9.0 years
9 - 13 Lacs
Chennai
Work from Office
Role Description: As a Technical Lead - Data Science and Modeling at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable.

Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders

Technical Skills:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL
- Understanding of big data technologies such as Hadoop, Spark, or Hive
- Excellent communication skills, with the ability to explain complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Openness to new ideas and a willingness to learn and develop new skills, along with the ability to work well under pressure and manage multiple tasks and priorities

Qualifications
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
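As a small illustration of the supervised-learning work this role names, here is a minimal scikit-learn sketch: train a classifier and report AUC on a held-out split. The CSV path and target column are hypothetical.

```python
# Minimal sketch: supervised classification with a train/test split.
# The dataset path and "churned" target column are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_churn.csv")  # hypothetical dataset
X = df.drop(columns=["churned"])
y = df["churned"]

# Stratify so the class balance is preserved in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```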
Posted 2 days ago
4.0 - 6.0 years
7 - 12 Lacs
Gurugram
Work from Office
Role Description: As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience in cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform, and for troubleshooting any issues that arise.

Roles & Responsibilities:
- Designing, developing, and maintaining large-scale big data platforms using technologies like Hadoop, Spark, and Kafka
- Creating and managing data warehouses, data lakes, and data marts
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Troubleshooting and resolving big data platform issues
- Collaborating with other teams to ensure the consistency and integrity of data

Technical Skills:
- Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka
- Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams
- Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage
- Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala
- Excellent communication skills, with the ability to explain complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive, collaborative, and productive work environment, taking responsibility for the whole team

Qualifications
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 2 days ago
4.0 - 6.0 years
11 - 16 Lacs
Gurugram
Work from Office
Role Description: As a Senior Data Science and Modeling Specialist at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable.

Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders

Technical Skills:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL
- Understanding of big data technologies such as Hadoop, Spark, or Hive
- Excellent communication skills, with the ability to explain complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive, collaborative, and productive work environment, taking responsibility for the whole team

Qualifications
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 2 days ago
7.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Description: As a Technical Lead - Data Science and Modeling at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable.

Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders

Technical Skills:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL
- Understanding of big data technologies such as Hadoop, Spark, or Hive
- Excellent communication skills, with the ability to explain complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Openness to new ideas and a willingness to learn and develop new skills, along with the ability to work well under pressure and manage multiple tasks and priorities

Qualifications
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 2 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users. This will give you the right visibility into, and understanding of, the criticality of your developments.

Responsibilities
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope
- Active contribution to code and development in projects and services
- Partnering with data engineers to ensure data access for discovery and that proper data is prepared for model consumption
- Partnering with ML engineers working on industrialization
- Communicating with business stakeholders during service design, training, and knowledge transfer
- Supporting large-scale experimentation and building data-driven models
- Refining requirements into modelling problems
- Influencing product teams through data-based recommendations
- Researching state-of-the-art methodologies
- Creating documentation for learnings and knowledge transfer
- Creating reusable packages and libraries
- Ensuring on-time, on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards
- Leveraging big data technologies to help process data and build scaled data pipelines (batch to real time)
- Implementing the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines
- Automating ML model deployments

Qualifications
- BE/B.Tech in Computer Science, Maths, or other technical fields
- Overall 2-4 years of experience working as a Data Scientist
- 2+ years' experience building solutions in the commercial or supply chain space
- 2+ years working in a team to deliver production-level analytic solutions
- Fluent in git (version control); understanding of Jenkins and Docker is a plus
- Fluent in SQL syntax
- 2+ years' experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems
- 2+ years' experience in developing business-problem-related statistical/ML modeling with industry tools, with a primary focus on Python or PySpark development
- Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus
- Programming skills: hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
- Cloud (Azure): experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage
- Business storytelling and communicating data insights in a business-consumable format; fluency in one visualization tool
- Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities
- Experience with the Agile methodology for teamwork and analytics 'product' creation
- Experience in Reinforcement Learning is a plus
- Experience in simulation and optimization problems in any space is a plus
- Experience with Bayesian methods is a plus
- Experience with causal inference is a plus
- Experience with NLP is a plus
- Experience with Responsible AI is a plus
- Experience with distributed machine learning is a plus
- Experience in DevOps, with hands-on experience with one or more cloud service providers (AWS, GCP, Azure preferred)
- Model deployment experience is a plus
- Experience with version control systems like GitHub and CI/CD tools
- Experience in exploratory data analysis
- Knowledge of MLOps/DevOps and deploying ML models is preferred
- Experience using MLflow, Kubeflow, etc. will be preferred
- Experience executing and contributing to MLOps automation infrastructure is good to have
- Exceptional analytical and problem-solving skills
- Stakeholder engagement with BUs and vendors
- Experience building statistical models in the retail or supply chain space is a plus
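Since the posting highlights MLflow for the ML lifecycle, here is a minimal MLflow tracking sketch: log a run's parameters, metric, and model artifact so the model can be versioned and deployed later. The experiment name and data are hypothetical.

```python
# Minimal sketch: logging a model run with MLflow tracking.
# Experiment name, hyperparameter, and data are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=500, n_features=10, random_state=0)  # stand-in data

mlflow.set_experiment("demand-forecast-sketch")
with mlflow.start_run():
    model = Ridge(alpha=0.5).fit(X, y)
    mlflow.log_param("alpha", 0.5)                 # hyperparameter for this run
    mlflow.log_metric("train_r2", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")       # versioned artifact for deployment
```

In an Azure Databricks setup like the one described, the tracking URI would point at the workspace's MLflow server; the logging calls stay the same.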
Posted 2 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Got a Way with Words? Let’s Turn That Talent into Impact. We’re MaSs – a future-forward digital marketing agency from the UK, with our delivery squad making waves in Kochi. We turn data into stories and ideas into action, crafting copy that doesn’t just sit pretty – it drives results. AI might be rewriting the rules, but we still believe in the power of a human touch. If you can write words that make people stop, think, and act – this is your invitation to do it with us.

What’s the Gig?
We’re looking for a sharp, word-loving Content & Copywriting Intern/Trainee who wants to dive headfirst into the world of digital storytelling. If you obsess over the perfect headline, love crafting compelling narratives, and are curious about how AI can amplify creativity – you belong here. You’ll work with our team of marketers to create copy that clicks, converts, and keeps audiences hooked. Plus, you’ll learn to leverage AI tools to boost your workflow and sharpen your writing game.

What You’ll Do (a.k.a. Your Superpowers)
• Write attention-grabbing copy for social media, websites, blogs, emails, and ads.
• Adapt your writing style to match different brands, audiences, and vibes.
• Collaborate with marketers to turn concepts into persuasive, personality-packed content.
• Learn how to use AI to brainstorm, edit, and optimize your copy.
• Stay on top of industry trends and audience psychology to make your writing smarter.
• Edit and fine-tune your work until it’s pitch-perfect.

What We’re Looking For (Is This You?)
Wordsmith-in-the-Making:
• You have a strong command of English (bonus points if you make grammar look effortless).
• You can make even the dullest topic sound fascinating.
• You love playing with language and know how to write with personality.
Curious & Adaptable:
• You’re open to trying new writing styles and tones (from cheeky to corporate).
• You stay curious about how people think, act, and buy.
• You embrace AI tools to work smarter, not harder.
Hungry to Learn:
• You take feedback like a pro and use it to level up.
• You’re excited to grow in a fast-paced, ever-evolving creative environment.

What’s in It for You?
• Hands-on experience writing for real clients and real audiences.
• One-on-one mentorship from marketing pros who want you to shine.
• Exposure to cutting-edge AI writing tools and techniques.
• A fun, collaborative workspace where creativity rules.
• A shot at a full-time gig if you blow us away.

Ready to shape stories and spark action? Apply today
Posted 2 days ago
2.0 - 7.0 years
15 - 20 Lacs
Bengaluru
Work from Office
As an AI Ops Expert, you will take responsibility and full ownership of deliverables, meeting defined quality standards within the agreed timeline and budget. You will:
- Design, implement, and manage AIOps solutions to automate and optimize AI/ML workflows
- Collaborate with data scientists, engineers, and other stakeholders to ensure seamless integration of AI/ML models into production
- Monitor and maintain the health and performance of AI/ML systems
- Develop and maintain CI/CD pipelines for AI/ML models
- Implement best practices for model versioning, testing, and deployment
- Troubleshoot and resolve issues related to AI/ML infrastructure and workflows
- Stay up to date with the latest AIOps, MLOps, and Kubernetes tools and technologies

Requirements and skills
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- 2-7 years of relevant experience
- Proven experience in AIOps, MLOps, or related fields
- Strong proficiency in Python and experience with FastAPI
- Strong hands-on expertise in Kubernetes (or AKS)
- Hands-on experience with MS Azure and its AI/ML services, including Azure ML Flow
- Proficiency in using DevContainer for development
- Knowledge of CI/CD tools such as Jenkins, GitHub Actions, or Azure DevOps
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Strong problem-solving skills and the ability to work in a fast-paced environment
- Excellent communication and collaboration skills

Preferred skills
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn
- Familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar
- Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack
- Understanding of data versioning tools like DVC or MLflow
- Experience with infrastructure-as-code (IaC) tools like Terraform or Ansible
- Proficiency in Azure-specific tools and services, such as Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, and Azure Monitor/Application Insights
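Given the Python/FastAPI/Kubernetes combination this role names, here is a minimal FastAPI sketch of the serving side of such a setup: a prediction endpoint plus a health check that a Kubernetes probe can hit. The model file and feature schema are hypothetical.

```python
# Minimal sketch: serving a pre-trained model behind FastAPI.
# The model file name and feature schema are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model

class Features(BaseModel):
    values: list[float]  # flat feature vector for one prediction

@app.get("/healthz")
def healthz():
    # Liveness/readiness target for a Kubernetes probe.
    return {"status": "ok"}

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Run locally with `uvicorn main:app`; in a cluster the same app would be containerized and fronted by a Service, with `/healthz` wired into the probes.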
Posted 2 days ago
4.0 - 8.0 years
8 - 12 Lacs
Pune
Work from Office
Piller Soft Technology is looking for a Lead Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include:
- Designing and developing data pipelines that move data from various sources to storage and processing systems
- Building and maintaining data infrastructure, such as data warehouses, data lakes, and data marts
- Ensuring data quality and integrity by setting up data validation processes and implementing data quality checks
- Managing data storage and retrieval by designing and implementing data storage systems, such as NoSQL databases or Hadoop clusters
- Developing and maintaining data models, such as data dictionaries and entity-relationship diagrams, to ensure consistency in data architecture
- Managing data security and privacy by implementing security measures, such as access controls and encryption, to protect sensitive data
- Leading and managing a team of data engineers, providing guidance and support for their work
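For the data quality checks mentioned above, a common pattern is to fail the pipeline when basic invariants break. A minimal PySpark sketch, with hypothetical table and key names:

```python
# Minimal sketch: a data quality gate that fails the job when the
# primary key is null or duplicated. Table/column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()
orders = spark.read.table("curated.orders")  # hypothetical table

null_keys = orders.filter(F.col("order_id").isNull()).count()
dupes = orders.count() - orders.dropDuplicates(["order_id"]).count()

if null_keys or dupes:
    # Raising aborts the pipeline run so bad data never propagates downstream.
    raise ValueError(f"DQ check failed: {null_keys} null keys, {dupes} duplicates")
```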
Posted 2 days ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Minimum of 4+ years of software development experience, with demonstrated expertise in standard development best-practice methodologies.

Skills required: Spark, Scala, Python, HDFS, Hive, a scheduler (Oozie, Airflow), Kafka, SQL, RDBMS, Docker, Kubernetes, RabbitMQ/Kafka, and monitoring tools (Splunk or ELK).

Profile required
- Integrate test frameworks into the development process
- Refactor existing solutions to make them reusable and scalable
- Work with operations to get solutions deployed
- Take ownership of production deployment of code
- Collaborate with and/or lead cross-functional teams; build and launch applications and data platforms at scale, for either revenue-generating or operational purposes
- Come up with coding and design best practices
- Thrive in a self-motivated, internal-innovation-driven environment
- Adapt quickly to new application knowledge and changes
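The skills list above pairs Kafka with monitoring-heavy production work; although this stack leans Spark/Scala, the consumer pattern itself is language-agnostic. A minimal Python sketch using the kafka-python client, with hypothetical broker, topic, and group id:

```python
# Minimal sketch: consuming JSON messages from a Kafka topic.
# Broker, topic, and group id are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="broker:9092",
    group_id="orders-audit",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",  # start from the beginning if no committed offset
)

for message in consumer:
    # Replace the print with real processing; offsets are committed by the group.
    print(message.offset, message.value)
```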
Posted 2 days ago
3.0 - 8.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Develop code and test-case scenarios by applying relevant software craftsmanship principles, and meet the acceptance criteria. Complete the assigned learning path and contribute to daily meetings. Deliver on all aspects of the Software Development Lifecycle (SDLC) in line with Agile and IT craftsmanship principles. Take part in team ceremonies, be they agile practices or chapter meetings. Deliver high-quality, clean code and design that can be re-used. Actively work with other development teams to define and implement APIs and rules for data access. Perform bug-free release validations and produce test and defect reports. Contribute to developing scripts, configuring quality, and automating framework usage. Run and maintain test suites with the guidance of seniors. Support existing data models, the data dictionary, data pipeline standards, and the storage of source, process, and consumer metadata.

Profile required
- 3+ years of expertise and hands-on experience in Core Java and Python/Spark, with a good conceptual understanding of OOPs and data engineering
- Hands-on experience of at least 2 years with PostgreSQL or other SQL databases
- Hands-on experience of at least 2 years with Spring Boot
- Hands-on experience of at least 2 years in web GUI development using ReactJS/AngularJS
- Hands-on experience in API development
- Prior experience working with CI/CD tools (Maven, Git, Jenkins)
- Working knowledge of cloud platforms is good to have
- Professional attitude: self-motivated, fast learner, team player, independent, with the ability to handle multiple tasks and functional topics simultaneously
Posted 2 days ago
5.0 - 8.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Mid-Senior Python/PySpark Developer with an Azure tech stack. The ideal candidate will have strong expertise in Python 3.12, Spark 3.x, Postgres 15 on Azure, OCP 19, SOK8s (AKS 1.27) on Azure, and microservices architecture. This role requires fluency in database technologies (Oracle/PL-SQL, Postgres), knowledge of CI/CD pipelines, and a proactive approach to project leadership and mentorship. The candidate is expected to work with the project team to lead the solutioning and develop the expected solution, following BDD/TDD practices.
- Deliver working code within agreed timelines that meets the acceptance criteria and the Definition of Done (D.O.D.)
- Come up with low-level designs; understand project objectives, application impacts, and expected contributions
- Review deliveries and periodically review application performance
- Collaborate within the team to maintain production stability and facilitate on-demand releases
- Support team members on their technical tasks, task estimation, and timely code reviews
- Manage the technical debt of the applications
- Drive consistent development practices w.r.t. tools, common components, and documentation

Profile required
- 5 to 8 years of hands-on experience working with Python
- At least 2 years of hands-on experience using PySpark with Hadoop
- Experience working on projects using Agile methodologies and CI/CD pipelines
- Experience working with at least one RDBMS, such as Oracle, PostgreSQL, or SQL Server
- Exposure to Linux platforms such as RHEL, and to cloud platforms such as Azure Data Lake
- Exposure to container and orchestration tools such as Docker and Kubernetes
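Combining the Python/PySpark and Postgres pieces of this stack, here is a minimal sketch of extracting a PostgreSQL table over JDBC and landing it in a data lake. Connection details and paths are hypothetical, and the Postgres JDBC driver is assumed to be on the Spark classpath.

```python
# Minimal sketch: PySpark JDBC extract from Postgres into a lake.
# URL, credentials, table, and output path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pg-extract-sketch").getOrCreate()

accounts = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/appdb")
    .option("dbtable", "public.accounts")
    .option("user", "reader")
    .option("password", "***")  # in practice, injected from a secret store
    .option("driver", "org.postgresql.Driver")
    .load()
)

# Write to a hypothetical Azure Data Lake path as columnar parquet.
accounts.write.mode("overwrite").parquet(
    "abfss://lake@storage.dfs.core.windows.net/raw/accounts"
)
```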
Posted 2 days ago
4.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Design, development, and testing of components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture. Contribute to the design, development, and deployment of new features and new components in the Azure public cloud. Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs. Ensure the processes in TOP deliver optimal performance, and assist in performance tuning and optimization. Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure Craftsmanship practices are followed. Follow the Agile at Scale process in terms of participation in PI Planning and follow-up, sprint planning, and backlog maintenance in Jira. Organize training sessions on the core platform and related technologies for the Tribe/Business line, so that relevant stakeholders are kept continuously up to date on the platform's evolution.

Profile required
- Around 4-6 years of experience in the IT industry, preferably in the banking domain
- Expertise and experience in Java (Java 1.8: building APIs, Java threads, collections, streaming, dependency injection/inversion), JUnit, big data (Spark, Oozie, Hive), and Azure (AKS, CLI, Event, Key Vault), having been part of digital transformation initiatives, with knowledge of Unix, SQL/RDBMS, and monitoring
- Development experience in REST APIs
- Experience managing tools: Git/Bitbucket, Jenkins, NPM, Docker/Kubernetes, Jira, Sonar
- Knowledge of Agile practices and Agile at Scale
- Good communication/collaboration skills
Posted 2 days ago
4.0 - 7.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Design, development, and testing of components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture. Contribute to the design, development, and deployment of new features and new components in the Azure public cloud. Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs. Ensure the processes in TOP deliver optimal performance, and assist in performance tuning and optimization. Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure Craftsmanship practices are followed. Follow the Agile at Scale process in terms of participation in PI Planning and follow-up, sprint planning, and backlog maintenance in Jira. Organize training sessions on the core platform and related technologies for the Tribe/Business line, so that relevant stakeholders are kept continuously up to date on the platform's evolution.
Posted 2 days ago
4.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Design, development, and testing of components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture. Contribute to the design, development, and deployment of new features and new components in the Azure public cloud. Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs. Ensure the processes in TOP deliver optimal performance, and assist in performance tuning and optimization. Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure Craftsmanship practices are followed. Follow the Agile at Scale process in terms of participation in PI Planning and follow-up, sprint planning, and backlog maintenance in Jira. Organize training sessions on the core platform and related technologies for the Tribe/Business line, so that relevant stakeholders are kept continuously up to date on the platform's evolution.
- Around 4-6 years of experience in the IT industry, preferably in the banking domain
- Expertise and experience in Java (Java 1.8: building APIs, Java threads, collections, streaming, dependency injection/inversion), JUnit, big data (Spark, Oozie, Hive), and Azure (AKS, CLI, Event, Key Vault), having been part of digital transformation initiatives, with knowledge of Unix, SQL/RDBMS, and monitoring
- Development experience in REST APIs
- Experience managing tools: Git/Bitbucket, Jenkins, NPM, Docker/Kubernetes, Jira, Sonar
- Knowledge of Agile practices and Agile at Scale
- Good communication/collaboration skills
Posted 2 days ago
4.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Design, development, and testing of components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture. Contribute to the design, development, and deployment of new features and new components in the Azure public cloud. Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs. Ensure the processes in TOP deliver optimal performance, and assist in performance tuning and optimization. Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure Craftsmanship practices are followed. Follow the Agile at Scale process in terms of participation in PI Planning and follow-up, sprint planning, and backlog maintenance in Jira. Organize training sessions on the core platform and related technologies for the Tribe/Business line, so that relevant stakeholders are kept continuously up to date on the platform's evolution.

Profile required
- Around 4-6 years of experience in the IT industry, preferably in the banking domain
- Expertise and experience in Java (Java 1.8: building APIs, Java threads, collections, streaming, dependency injection/inversion), JUnit, big data (Spark, Oozie, Hive), and Azure (AKS, CLI, Event, Key Vault), having been part of digital transformation initiatives, with knowledge of Unix, SQL/RDBMS, and monitoring
- Development experience in REST APIs
- Experience managing tools: Git/Bitbucket, Jenkins, NPM, Docker/Kubernetes, Jira, Sonar
- Knowledge of Agile practices and Agile at Scale
- Good communication/collaboration skills
Posted 2 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
We are PepsiCo. PepsiCo is one of the world's leading food and beverage companies, with more than $79 billion in net revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers, and history makers, located around the world and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with pep+ (PepsiCo Positive). For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com.

PepsiCo Data Analytics & AI Overview
With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo's leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise.

The Data Science Pillar in DA&AI is the organization that Data Scientists and ML Engineers report into within the broader D+A organization. DS also leads, facilitates, and collaborates on the larger DS community in PepsiCo; provides the talent for the development and support of DS components and their life cycle within DA&AI products; and supports 'pre-engagement' activities as requested and validated by the DA&AI prioritization framework.

Data Scientist - Gurugram and Hyderabad
The role will work on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines.

Responsibilities
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope
- Collaborating with data engineers and ML engineers to understand data and models, and to leverage various advanced analytics capabilities
- Ensuring on-time, on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards
- Using big data technologies to help process data and build scaled data pipelines (batch to real time)
- Automating the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP pipelines
- Setting up cloud alerts, monitors, dashboards, and logging, and troubleshooting machine learning infrastructure
- Automating ML model deployments

Qualifications
- Minimum 3 years of hands-on work experience in data science/machine learning
- Minimum 3 years of SQL experience
- Experience in DevOps and Machine Learning (ML), with hands-on experience with one or more cloud service providers
- BE/BS in Computer Science, Math, Physics, or other technical fields
- Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models
- Programming skills: hands-on experience in statistical programming languages like Python and database query languages like SQL
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
- Any cloud: experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage
- Model deployment experience will be a plus
- Experience with version control systems like GitHub and CI/CD tools
- Experience in exploratory data analysis
- Knowledge of MLOps/DevOps and deploying ML models is required
- Experience using MLflow, Kubeflow, etc. will be preferred
- Experience executing and contributing to MLOps automation infrastructure is good to have
- Exceptional analytical and problem-solving skills
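On the unsupervised side of the modeling skills listed above, here is a minimal scikit-learn sketch: scale features and segment records with k-means. The data here is synthetic, standing in for real feature data.

```python
# Minimal sketch: unsupervised segmentation with k-means.
# The data is synthetic; a real job would load engineered features instead.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))  # stand-in for real feature data

# Standardize so no single feature dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
print("cluster sizes:", np.bincount(kmeans.labels_))
```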
Posted 2 days ago
0.0 - 3.0 years
0 - 0 Lacs
Pune, Maharashtra
Remote
We’re seeking an experienced and confident Cold Calling Expert who can make high-quality outbound calls to businesses and prospects in the USA market. Your goal is to generate qualified leads, set appointments, and spark meaningful business conversations.

Key Responsibilities (female candidates only)
- Make outbound cold calls to US-based leads (B2B)
- Engage prospects, introduce services, and handle objections
- Qualify leads and schedule meetings for the sales team
- Maintain CRM records and update call outcomes
- Meet weekly call volume and conversion targets

Requirements
- Proven cold calling or outbound sales experience (USA market preferred)
- Fluent English communication skills, both spoken and written
- Comfortable with US business culture and etiquette
- Strong interpersonal and persuasive skills
- Self-motivated, reliable, and target-oriented
- Tech-savvy, with experience using CRMs (e.g., HubSpot, Zoho, Salesforce)
- Quiet work environment and a noise-cancelling headset

Preferred Qualifications
- Previous BPO, VA agency, or lead generation experience
- Experience calling for real estate, SaaS, medical, or service-based companies
- Familiarity with tools like Dialpad, Aircall, RingCentral, etc.

What We Offer
- Competitive hourly rate or commission-based model (based on experience)
- Flexible working hours
- Work from the comfort of your home
- Ongoing projects with long-term opportunities
- Supportive team and training (if required)

Job Types: Full-time, Part-time
Pay: ₹15,000.00 - ₹40,000.00 per month
Benefits: Health insurance, internet reimbursement, paid sick time
Education: Bachelor's (Preferred)
Experience: total: 3 years (Required)
Language: English (Preferred), USA English (Required)
Location: Pune, Maharashtra (Required)
Shift availability: Night Shift (Required)
Work Location: Remote
Speak with the employer: +91 7841970552
Posted 2 days ago
6.0 - 8.0 years
1 - 4 Lacs
Chennai
Hybrid
- 3+ years of experience as a Snowflake Developer or Data Engineer
- Strong knowledge of SQL, SnowSQL, and Snowflake schema design
- Experience with ETL tools and data pipeline automation
- Basic understanding of US healthcare data (claims, eligibility, providers, payers)
- Experience working with large-scale datasets and cloud platforms (AWS, Azure, GCP)
- Familiarity with data governance, security, and compliance (HIPAA, HITECH)
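As a small illustration of day-to-day Snowflake development, here is a minimal sketch of querying Snowflake from Python with the official connector. All connection values, table, and column names are hypothetical placeholders.

```python
# Minimal sketch: querying Snowflake via the official Python connector.
# Every connection value and identifier below is a hypothetical placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="***",  # in practice, pulled from a secrets manager
    warehouse="ETL_WH",
    database="CLAIMS_DB",
    schema="RAW",
)
cur = conn.cursor()
try:
    cur.execute("SELECT claim_id, payer, billed_amount FROM claims LIMIT 10")
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
```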
Posted 2 days ago
12.0 years
0 Lacs
Greater Chennai Area
On-site
Job Description
The Data Engineering team within the AI, Data, and Analytics (AIDA) organization is the backbone of our data-driven sales and marketing operations. We provide the essential foundation for transformative insights and data innovation. By focusing on integration, curation, quality, and data expertise across diverse sources, we power world-class solutions that advance Pfizer's mission. Join us in shaping a data-driven organization that makes a meaningful global impact.

Role Summary
We are seeking a technically adept and experienced Data Solutions Engineering Senior Manager who is passionate about, and skilled in, designing and developing robust, scalable data models. This role focuses on optimizing the consumption of data sources to generate unique insights from Pfizer's extensive data ecosystems. A strong technical design and development background is essential to ensure effective collaboration with engineering and developer team members.

As a Senior Data Solutions Engineer in our data lake/data warehousing team, you will play a crucial role in designing and building data pipelines and processes that support data transformation, workload management, data structures, dependencies, and metadata management. Your expertise will be pivotal in creating and maintaining the data capabilities that enable advanced analytics and data-driven decision-making.

In this role, you will work closely with stakeholders to understand their needs and collaborate with them to create end-to-end data solutions. This process starts with designing data models and pipelines and establishing robust CI/CD procedures. You will work with complex and advanced data environments, design and implement the right architecture to build reusable data products and solutions, and support various analytics use cases, including business reporting, production data pipelines, machine learning, optimization models, statistical models, and simulations.

As the Data Solutions Engineering Senior Manager, you will develop sound data quality and integrity standards and controls. You will enable data engineering communities with standard protocols to validate and cleanse data, resolve data anomalies, implement data quality checks, and conduct system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data-driven solutions for the pharmaceutical industry.

Role Responsibilities
- Project solutioning, including scoping and estimation
- Data sourcing, investigation, and profiling
- Prototyping and design thinking
- Designing and developing data pipelines and complex data workflows
- Creating standard procedures to ensure efficient CI/CD
- Responsibility for project documentation and the playbook, including but not limited to physical models, conceptual models, data dictionaries, and data cataloging
- Technical issue debugging and resolution
- Accountability for engineering development of both internal- and external-facing data solutions, conforming to EDSE and Digital technology standards
- Partnering with internal/external partners to design, build, and deliver best-in-class data products globally, improving the quality of our customer analytics and insights and supporting commercial's role in helping patients
- Demonstrating outstanding collaboration and operational excellence
- Driving best practices and world-class product capabilities
Qualifications
- Bachelor's degree in a technical area such as computer science, engineering, or management information science; a Master's degree is preferred
- 12 to 16 years of combined data warehouse/data lake experience as a data lake/warehouse developer or data engineer
- 12 to 16 years in developing data products and data features servicing analytics and AI use cases
- Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred
- Domain knowledge in the pharmaceutical industry preferred
- Good knowledge of data governance and data cataloging best practices

Technical Skillset
- 9+ years of hands-on experience working with SQL, Python, and object-oriented scripting languages (e.g. Java, C++) in building data pipelines and processes
- Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views
- 9+ years of hands-on experience designing and delivering data lake/data warehousing projects
- A minimum of 5 years of hands-on experience designing data models
- Proven ability to effectively assist the team in resolving technical issues
- Proficiency in working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable
- Experience with AWS services (EC2, EMR, RDS) and Spark is preferred
- Solid understanding of Scrum/Agile is preferred, with working knowledge of CI/CD, GitHub, and MLflow
- Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred
- Great communication skills
- Great business influencing and stakeholder management skills

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
Posted 2 days ago