2.0 years
0 Lacs
Gurugram, Haryana
On-site
- 1+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python) or statistical/mathematical software (e.g., R, SAS, Matlab)
- 2+ years of experience as a data/research scientist, statistician or quantitative analyst at an internet-based company with complex and big data sources

Job Description
Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models and systems for capacity planning, transportation and the fulfillment network? If so, then this is the job for you. Our team is responsible for creating core analytics tech capabilities, platform development and data engineering. We develop scalable analytics applications and research models to optimize operational processes. We standardize and optimize data sources and visualization efforts across geographies, and build and maintain the online BI services and data mart. You will work with professional software development managers, data engineers, scientists, business intelligence engineers and product managers using rigorous quantitative approaches to ensure high quality data tech products for our customers around the world, including India, Australia, Brazil, Mexico, Singapore and the Middle East. Amazon is growing rapidly, and because we are driven by faster delivery to customers, a more efficient supply chain network, and lower cost of operations, our main focus is the development of strategic models and automation tools fed by our massive amounts of available data. You will be responsible for building these models/tools that improve the economics of Amazon's worldwide fulfillment networks in emerging countries as Amazon increases the speed and decreases the cost to deliver products to customers. You will identify and evaluate opportunities to reduce variable costs by improving fulfillment center processes, transportation operations and scheduling, and the execution of operational plans. You will also improve the efficiency of capital investment by helping the fulfillment centers improve storage utilization and the effective use of automation. Finally, you will help create the metrics to quantify improvements to fulfillment costs (e.g., transportation and labor costs) resulting from the application of these optimization models and tools.

Major responsibilities include:
· Translate business questions and concerns into specific analytical questions that can be answered with available data using BI tools; produce the required data when it is not available.
· Apply statistical and machine learning methods to specific business problems and data.
· Create global standard metrics across regions and perform benchmark analyses.
· Ensure data quality throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc.
· Communicate proposals and results clearly, backed by data and coupled with actionable conclusions, to drive business decisions.
· Collaborate with colleagues from multidisciplinary science, engineering and business backgrounds.
· Develop efficient data querying and modeling infrastructure.
· Manage your own process: prioritize and execute high-impact projects, triage external requests, and ensure projects are delivered on time.
· Use code (Python, R, Scala, etc.) to analyze data and build statistical models.
Knowledge of statistical packages and business intelligence tools such as SPSS, SAS, S-PLUS, or R. Experience with clustered data processing (e.g., Hadoop, Spark, MapReduce, and Hive). Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
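As a hedged illustration of the day-to-day analysis this role describes (translating a business question into a metric over operations data), here is a minimal Python sketch; the table, columns, and cost metric are hypothetical, not Amazon's actual schema.

```python
# Hypothetical example: quantify fulfillment cost per shipped unit by site,
# the kind of metric this role would standardize across regions.
import pandas as pd

# Assumed columns; real sources would be queried via SQL from a data mart.
shipments = pd.DataFrame({
    "fulfillment_center": ["DEL1", "DEL1", "BLR2", "BLR2"],
    "units_shipped": [1200, 950, 800, 1100],
    "transport_cost": [5400.0, 4100.0, 3900.0, 5200.0],
    "labor_cost": [2100.0, 1800.0, 1500.0, 2300.0],
})

per_site = shipments.groupby("fulfillment_center").sum(numeric_only=True)
per_site["cost_per_unit"] = (
    per_site["transport_cost"] + per_site["labor_cost"]
) / per_site["units_shipped"]
print(per_site.sort_values("cost_per_unit"))
```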
Posted 1 week ago
0 years
0 Lacs
Bengaluru South, Karnataka, India
On-site
You Lead the Way. We've Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you will learn and grow as we help you create a career journey that is unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you will be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we will win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we will do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

Data Engineer, Digital Workplace Data & Analytics Platform

What is Amex's objective for a digital workplace? The Digital Workplace Data & Analytics Platform with AI/ML capabilities aims to bring together the data from all Unified Workspace, Collaboration and Colleague Servicing platforms, combining this with HR, Information Security, and network data to provide real-time, meaningful insights in areas such as user experience, health scoring, productivity, and overall IT visibility. As Engineer 1 of the Digital Workplace Data & Analytics Platform, you will be responsible for leading the engineering teams to develop the advanced data engineering pipeline, data management, and data DevOps on the cloud platform, and to enhance it with personalization capabilities, analytics, engineering automation, and best practices. Our winning aspiration is to deliver the best Colleague digital experience. We simplify work and raise productivity by empowering Colleagues with the best digital tools and services.

Opportunity for Impact
Digital Workplace at American Express is entering a new phase of technology transformation driven by opportunities to improve Colleague experience, raise productivity and collaboration, and drive operational efficiency of all service and infrastructure operations. If you have the talent and desire to deliver innovative products and services at a rapid pace, with hands-on experience and strategic thinking in areas of productivity and collaboration software suites, endpoint computing and security, mobile platforms, data management and analytics, and software engineering, join our leadership team to help with our transformation journey.

Role and Responsibilities:
The Data & Analytics platform with AI/ML capabilities is central to the future of how we work and improve colleague experience while identifying opportunities for improvement. As the engineer of this group, you will:
● Create and build the entire Data & Analytics solution at production grade.
● Work and collaborate with Product Management and Digital Workplace teams to influence key decisions on the architecture, solutions and implementation of a scalable, reliable, and cost-effective Data & Analytics platform
● Bring thought leadership to advance the overall state of technology and customer focus for the platform
● Stay up to date with the latest advancements in cloud (GCP), data engineering pipelines (mainly Spark), data ingestion, and data management.
● A portfolio showcasing previous cloud-based Data & Analytics projects, contributions to open-source projects, or relevant publications is a plus.
● Build a culture of innovation, ownership, accountability, and customer focus
● Contribute to the American Express Data & Analytics Strategy, working with other Technology teams to drive enterprise solutions, define best practice at a company level and further develop skills and experience outside Digital Workplace.
● Partner with the Digital Workplace technology teams to develop the next-generation solution platform strategy across all products and channels.
● Participate actively and constructively in agile team meetings and application reviews
● Work directly with and learn from the business, product, and engineering leaders across the organization
● Strengthen the collaboration with industry partners/suppliers for more robust data solutions and market research for innovative solutions in this space.

Professional Qualifications:
● Extensive knowledge of building production-grade data pipelines, mainly in Spark
● Extensive knowledge of real-time data ingestion, batch data ingestion, data governance and management, and data DevOps
● Mandatory: strong experience in scalability and large-scale distributed system designs that handle millions of requests, including reliability engineering and platform monitoring.
● Expertise in pre-processing and cleaning large datasets as part of ingesting data from multiple sources within the enterprise.
● Willingness to learn the AI/ML pipelines used in multilingual conversational systems.
● Go the extra mile to solve real-world scenarios for user commands and requests by identifying the right pipeline strategy, tooling and frameworks.
● Review architecture and provide technical guidance for engineers
● Good to have: experience with various data architectures, the latest tools, and current and future trends in the data engineering space, especially Big Data, streaming and cloud technologies like GCP, AWS, Azure. GCP knowledge, however, is mandatory.
● Good to have: experience with Big Data technologies (the Hadoop framework), with at least one Big Data implementation on a production-grade platform.
● Good to have: experience with visualization tools like Elasticsearch, Tableau, Power BI, etc.
● Ability to learn new tools and paradigms in data engineering and science
● Well versed in Agile, SAFe and program management methods
● Bachelor's degree, with a preference for Computer Science; a Master's is a plus.

We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life.
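For context on the production-grade Spark pipeline work this posting emphasizes, here is a minimal, hedged PySpark sketch of a batch ingestion step; the bucket paths, column names, and quality filter are illustrative assumptions, not Amex's actual pipeline.

```python
# Minimal batch ingestion sketch: read raw events, apply a basic
# data-quality filter, and write partitioned Parquet. Illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("workspace-ingest-sketch").getOrCreate()

raw = spark.read.json("gs://example-bucket/raw/colleague_events/")  # assumed path

cleaned = (
    raw.filter(F.col("event_ts").isNotNull())        # drop malformed records
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])                  # assumed unique key
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("gs://example-bucket/curated/colleague_events/"))
```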
Benefits include:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Title: Data Scientist
Experience: 4 to 6 years
Company: Scry AI
Skill Sets: GEN AI (LLM), Python, Scikit-learn, SQL, k-means, decision trees, random forests, neural networks, genetic algorithms

About The Role
Design and develop Predictive Analytics and Prescriptive Analytics architecture and features for artificial intelligence software solutions, including developing machine learning and statistical models to perform big data analysis.

Duties Will Include The Following:
Research, design and develop algorithms and techniques to perform data manipulation and transformation; build generative and discriminative models to perform clustering, classification, regression, and recommendation systems. Design and build custom machine learning models. Build GEN AI (LLM) pipelines using LangChain, LlamaIndex, vector DBs, embedding generation and Hugging Face. Build and implement machine learning algorithms to plan, create, coordinate, and deploy business information solutions; gather business requirements and technical specifications for developing analytics solutions. Develop artificial intelligence and machine learning algorithms. Custom design and implement various techniques in machine learning such as support vector machines, neural networks, genetic algorithms, principal component analysis, cluster analysis, k-means, decision trees, random forests, regression analysis, Bayesian analysis, Naïve Bayes, and deep learning. Analyze large, complex and multi-dimensional data sets. Develop analytic solutions using R, Python, Scikit-learn, SQL and similar statistical tools and frameworks. Utilize deep learning neural networks including RNN, CNN, LSTM, and autoencoders to predict the behavior and performance of businesses.

Requirements
Master's degree (or equivalent) in Data Informatics, Computer Science, Applied Mathematics, Statistics, Analytics or a related field such as Computer Science and Engineering. Minimum 4+ years of relevant experience in the job offered or as a Software Engineer, Software Engineering Analyst, Application Developer, or Programmer Analyst. Minimum 2 years of experience with algorithm design and development; software architecture development; data engineering, data structures, and data visualization; and statistical data analysis. Artificial Intelligence (AI) and deep learning techniques; feature synthesis, engineering, and hyper-parameter tuning. Natural Language Processing; Machine Learning, including Bayesian modeling, multivariate and logistic regression, support vector machines, cluster analysis, decision and regression trees, random forests, neural networks and ensemble methods; analyzing large, complex, multi-dimensional data sets and developing analytic solutions. Experience working with structured and unstructured data (text, image). Working experience with unstructured data such as audio and video is not mandatory but will be an added advantage. Experience in predictive analytics using R, Python, Scikit-learn, SQL, and PostgreSQL statistical tools; Spark, Hadoop, Unix OS, GitHub; and the programming languages Java, Python, and R.

Our perfect candidate is someone that: Is proactive and an independent problem solver. Is a constant learner. We are a fast-growing company. We want you to grow with us! Is a team player and good communicator. (ref:hirist.tech)
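As a hedged sketch of one technique this posting names (k-means clustering with Scikit-learn), the snippet below groups synthetic records; the feature meanings, cluster centers, and cluster count are illustrative assumptions.

```python
# Illustrative k-means clustering with scikit-learn on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Three assumed customer segments over two features,
# e.g. monthly spend and visit frequency.
centers = np.array([[50, 4], [200, 12], [120, 8]])
X = np.vstack([rng.normal(c, 10, size=(100, 2)) for c in centers])

X_scaled = StandardScaler().fit_transform(X)  # scale before distance-based ML
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

print("cluster sizes:", np.bincount(model.labels_))
print("inertia:", round(model.inertia_, 2))
```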
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Core Python Engineer
Corporate Title: Assistant Vice President
Location: Pune, India

Role Description
Regulatory Technology aims to be an industry-leading function that delivers sustainable regulatory compliance through technology automation and competitive operating leverage, creating a safe and controlled operating environment that protects the Deutsche Bank franchise and its clients. This specific role is with Internal and External Surveillance, where we monitor traders' and clients' activities looking for anomalous behavior using Big Data tools, including Python, Spark and React technologies. In order to do this, we must ensure that we remain an engineering-focused organization. We are looking for technologists who demonstrate a passion to build the right thing in the right way. You will work as part of a cross-functional agile development team, collaborating with Product SMEs, analysts, testers, DevOps and stakeholders. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to take a leading role in all stages of software delivery, from initial analysis right through to production support. We will ask a lot of you, but we will offer a lot in return. You will have an opportunity to work in an environment that provides continuous growth and learning, with an emphasis on excellence and mutual respect.

People Management
As an Assistant Vice President, you will be expected to lead others, for example by sharing knowledge, facilitating meetings and workshops, defining new designs and discovering new techniques. In some cases, this may also include elements of team leadership or line management.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy
Gender-neutral parental leave
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for those 35 yrs. and above

Your Key Responsibilities
Work as part of a delivery team, collaborating with others to understand and capture requirements, analyse and refine stories, design solutions, implement them, test them and support them in production. Design and develop excellent and understandable server-side code. Work closely with users to gain feedback and ensure solutions are fit for purpose. Use BDD techniques, collaborating closely with users, analysts, developers and other testers; make sure we are building the right thing. Write code and write it well. Be proud to call yourself a programmer. Use test-driven development, write clean code and refactor constantly; make sure we are building the thing right. Define and evolve the architecture of the components you are working on. Contribute to architectural decisions at a department and bank-wide level. Ensure that the software you build is reliable and easy to support in production.
Be prepared to take your turn on call, providing 3rd-line support when it's needed. Help your team build, test and release software with short lead times and a minimum of waste. Work to develop and maintain a highly automated Continuous Delivery pipeline. Help create a culture of learning and continuous improvement within your team and beyond.

Your Skills And Experience
You will need:
Experience with Python and Scala for building server-side code
Experience with modern Python libraries, including PySpark, Pandas, NumPy, scikit-learn, etc.
Experience with server-side programming, preferably using Python Flask
Working knowledge of back-end data software: SQL (Impala/Oracle), SQLAlchemy
Understanding of back-end design, including REST services and SQL data access
Experience working in an agile team practicing Scrum or Kanban
Desire to write robust, maintainable and re-usable code
Practical experience of TDD and constant refactoring in a continuous integration environment
Practical experience of delivering good quality code within enterprise-scale development (CI/CD)
Supporting academic background in computer science (graduate)

The ideal candidate will also have:
Working knowledge of Hadoop
Experience with cloud technologies (Google Cloud, AWS, Azure, etc.)
Experience with container technologies such as Kubernetes or Docker
Experience in other programming languages; Java specifically would be helpful
Behavior-Driven Development, particularly experience of how it can be used to define requirements in a collaborative manner, ensure that the team builds the right thing and create a system of living documentation
Knowledge gained in Financial Services environments, for example products, instruments, trade lifecycles, regulation, risk, financial reporting or accounting

Education/Qualifications
We are happy to consider candidates with a wide variety of educational backgrounds and qualifications. Qualifications in computer science, STEM subjects, other numerate disciplines, business and economics are beneficial for the role. We also look favorably upon candidates with equivalent practical experience. This could have been gained in the workplace or in other contexts, such as contributing to open source software or working on personal projects.

How We'll Support You
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
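Since the role calls for server-side Python with Flask and SQLAlchemy, here is a minimal, hedged sketch of the kind of endpoint a surveillance service might expose; the table, columns, and notional threshold are invented for illustration, not Deutsche Bank's actual system.

```python
# Minimal Flask + SQLAlchemy sketch: expose flagged-trade counts per trader.
# Table/column names and the notional threshold are illustrative assumptions.
from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("sqlite:///surveillance.db")  # stand-in for Impala/Oracle

@app.route("/alerts/<trader_id>")
def alerts(trader_id: str):
    query = text(
        "SELECT COUNT(*) AS n FROM trades "
        "WHERE trader_id = :tid AND notional > 1000000"
    )
    with engine.connect() as conn:
        n = conn.execute(query, {"tid": trader_id}).scalar_one()
    return jsonify({"trader_id": trader_id, "flagged_trades": n})

if __name__ == "__main__":
    app.run(debug=True)
```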
Posted 1 week ago
4.0 - 9.0 years
11 - 20 Lacs
Pune, Chennai, Bengaluru
Work from Office
Project Role: Application Developer
Location: Bangalore/Chennai/Pune
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Ab Initio. Minimum 4-12 years of experience.
Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer with expertise in Ab Initio, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve working with Ab Initio, collaborating with cross-functional teams, and delivering impactful data-driven solutions.

Roles & Responsibilities:
- Design, build, and configure applications to meet business process and application requirements using Ab Initio.
- Collaborate with cross-functional teams to identify and prioritize business requirements and translate them into technical solutions.
- Develop and maintain technical documentation, including design documents, test plans, and user manuals.
- Troubleshoot and debug applications, identifying and resolving technical issues in a timely manner.
- Stay updated with the latest advancements in Ab Initio and related technologies, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-Have Skills: Expertise in Ab Initio.
- Good-to-Have Skills: Experience with related technologies such as Hadoop, Spark, and Hive.
- Strong understanding of ETL concepts and data warehousing principles.
- Experience with SQL and Unix scripting.
- Solid grasp of software development life cycle (SDLC) methodologies and agile development practices.

Interested candidates can reach out at neha.singh@mounttalent.com
Posted 1 week ago
20.0 - 22.0 years
35 - 70 Lacs
Chennai
Work from Office
Role & responsibilities
Technical Requirements: Advanced knowledge of data analysis and modeling principles: KPI tree creation, reporting best practices, predictive analytics, and statistical and ML-based modeling techniques. Work experience of 20+ years in analytics, inclusive of 5+ years of experience leading a team. Candidates from the insurance industry are preferred. Understanding of and experience using analytical concepts and statistical techniques: hypothesis development, designing tests/experiments, analyzing data, drawing conclusions, and developing actionable recommendations for business units. Strong problem-solving, quantitative and analytical abilities. Strong ability to plan and manage numerous processes, people, and projects simultaneously. The right candidate will also be proficient and experienced with the following tools/programs: big data environments (Teradata, Aster, Hadoop); testing tools such as Adobe Test & Target; data visualization tools (Tableau, RAW, Chart.js); and Adobe Analytics and other analytics tools.
Posted 1 week ago
10.0 - 20.0 years
30 - 45 Lacs
Chennai
Work from Office
Role & responsibilities
Advanced knowledge of data analysis and modeling principles: KPI tree creation, reporting best practices, predictive analytics, and statistical and ML-based modeling techniques. Work experience of 10+ years in analytics, inclusive of 3+ years of experience leading a team. Candidates from the insurance industry are preferred. Strong SQL skills, with the ability to perform effective querying involving multiple tables and subqueries. Understanding of and experience using analytical concepts and statistical techniques: hypothesis development, designing tests/experiments, analyzing data, drawing conclusions, and developing actionable recommendations for business units. Strong problem-solving, quantitative and analytical abilities. Strong ability to plan and manage numerous processes, people, and projects simultaneously. The right candidate will also be proficient and experienced with the following tools/programs: querying languages (SQL); big data environments (Teradata, Aster, Hadoop); testing tools such as Adobe Test & Target; and data visualization tools (Tableau, RAW, Chart.js).
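As a hedged illustration of the hypothesis testing both of these analytics roles mention (designing tests, analyzing data, drawing conclusions), the snippet below runs a two-sample t-test on synthetic A/B data; the metric, sample sizes, and effect size are invented.

```python
# Illustrative two-sample t-test for an A/B experiment (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=100.0, scale=15.0, size=500)    # assumed baseline metric
treatment = rng.normal(loc=103.0, scale=15.0, size=500)  # assumed small lift

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: the treatment effect is statistically significant.")
else:
    print("Fail to reject the null at the 5% level.")
```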
Posted 1 week ago
3.0 - 8.0 years
7 - 17 Lacs
Noida
Work from Office
About The Job
Position Title: Big Data Admin
Department: Product & Engineering
Job Scope: India
Location: Noida, India
Reporting to: Director, DevOps
Work Setting: Onsite

Purpose of the Job
We are expanding our Big Data stack, and we need to set up, optimize, and maintain multiple services across our Big Data clusters. We have in-house expertise in AWS, cloud, security, clusters, etc., but we now need a specialist (a Big Data Admin) who can help us set up and maintain the Big Data environment properly and keep it live with a multi-cluster setup in the production environment.

Key Responsibilities
Participate in requirements analysis. Write clean, scalable jobs. Collaborate with internal teams to produce solutions and architecture. Test and deploy applications and systems. Revise, update, refactor, and debug code. Improve existing software. Serve as an expert on the Big Data stack and provide technical support.

Qualifications Requirement: Experience, Skills & Education
Graduate with 3+ years of experience in Big Data technology
Expertise in Hadoop, YARN, Spark, Airflow, Cassandra, ELK, Redis, Grafana, etc.
Expertise in cloud-managed Big Data stacks like MWAA, EMR, EKS, etc.
Good knowledge of Python and scripting
Knowledge of optimization and performance tuning for the Big Data stack
Troubleshooting skills are a must
Must have good knowledge of Linux OS and troubleshooting

Desired Skills
Big Data stack. Linux OS and troubleshooting.

Why Explore a Career: Be a part of the revolution in healthcare marketing. Innovate with us to unite and transform the Healthcare Providers (HCPs) ecosystem for improved patient outcomes. We have been recognized and certified two times in a row: Best Places to Work NJ 2023 and Great Place to Work 2023. If you are passionate about health technology and have a knack for turning complex concepts into compelling narratives, we invite you to apply for this exciting opportunity to contribute to the success of our innovative health tech company.

Below are the competitive benefits that will be provided to selected candidates based on their location:
Competitive Salary Package
Generous Leave Policy
Flexible Working Hours
Performance-Based Bonuses
Health Care Benefits
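Given the posting's emphasis on Airflow (including managed MWAA), here is a minimal, hedged sketch of a daily maintenance DAG; the DAG id, schedule, and cleanup command are placeholders, not this team's actual jobs.

```python
# Minimal Airflow DAG sketch: a daily cluster-maintenance job.
# DAG id, schedule, and the cleanup command are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="cluster_maintenance_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    # Placeholder: in practice this might compact small files or rotate logs.
    cleanup_tmp = BashOperator(
        task_id="cleanup_tmp",
        bash_command="hdfs dfs -rm -r -skipTrash /tmp/staging/* || true",
    )

    report_health = BashOperator(
        task_id="report_health",
        bash_command="echo 'cluster health check placeholder'",
    )

    cleanup_tmp >> report_health
```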
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB_POSTING-3-71440
Job Description
Role Title: AVP, Campaign Operations Manager (L11)

Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work For by Great Place to Work. We were among the Top 50 of India's Best Workplaces in Building a Culture of Innovation by GPTW and in the Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the Ambition Box Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top-Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer flexibility and choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on advancing diverse talent into leadership roles.

Organizational Overview: The Performance Marketing Team is the engine behind business growth, as it handles the majority of marketing activities. This includes targeting the right audience through campaigns, distributing content via channel marketing, conducting thorough analysis of campaign launches and budgets, and ensuring compliance via surveillance and governance, all to maximize results and maintain a competitive edge. Together this team drives ROI and elevates Synchrony's brand presence in a dynamic market.

Role Summary/Purpose: As AVP, Campaign Operations, you will be an integral member of Growth Marketing within Synchrony Financial, India. In this role, you will work closely with internal teams and stakeholders to lead the evolution of our campaign operations, with a focus on personalized, end-to-end consumer journeys enabled by data, automation and Gen AI. You will serve as a transformation agent, working cross-functionally to embed a consumer-first mindset into the fabric of our marketing operations.

Key Responsibilities
Campaign Process Discovery :- Explore and analyze existing campaign methodology and processes to identify opportunities and gaps for the transformation
Journey Planning through Collaboration :- Engage with team members and stakeholders to understand real-world challenges and co-create journey-based strategies tailored to business objectives.
Lead Campaign Transformation :- Design and implement digitally enabled, consumer-first campaign operations that drive personalized engagement across channels
Journey Design and Execution :- Develop and manage consumer journeys in marketing platforms and Customer Data Platforms (CDPs), ensuring responsiveness and relevance in real time
Cross-Functional Collaboration :- Partner with other MarTech teams (e.g., core CDP, Audience Management), Marketing Cloud and Digital Product teams to align strategy and integrate personalization into journeys
Scalable Frameworks :- Create templates and strategic playbooks for identification, targeting, segmentation and orchestration of journeys to ensure successful transformation
Innovation and Best Practices :- Drive digital transformation by embedding modern campaign methodologies and personalization best practices into operational workflows
Team Enablement :- Mentor and upskill team members on journey-based thinking, platform usage and consumer-centric execution

Required Skills/Knowledge
8+ years of experience in digital marketing and campaign operations with consumer journey design, preferably within financial services
Strong background in consumer journey design, personalization and omnichannel campaign delivery
Proficiency with Customer Data Platforms (e.g., BlueConic, Adobe Experience Platform, Salesforce, SAS CI 360)
Working knowledge of SAS or SQL
Understanding of quality management practices in marketing operations, including process governance and continuous improvement
Hands-on experience with marketing automation platforms and real-time journey orchestration tools
Solid interpersonal skills at all organizational levels, including the ability to communicate effectively and influence others in the business

Desired Skills/Knowledge
Working knowledge of Gen AI applications in marketing operations and consumer insight generation
Experience driving change and implementing new frameworks or technologies within complex marketing ecosystems
Strong project management and analytical skills with a focus on measurable business impact
Creative thinker with the ability to reimagine existing processes
Experience with statistical analytics languages and platforms is a plus (e.g., Python, Hadoop)

Eligibility Criteria
Bachelor's Degree with 8+ years of IT experience or, in lieu of a degree, a minimum of 10+ years of marketing experience in the financial domain
Work Timings: 2 PM to 11 PM IST

For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (Formal/Final Formal, LPP)
L9+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible; L09+ employees can apply

Level/Grade: 11
Job Family Group: Marketing
Posted 1 week ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role
Data Engineer - 1 (Experience: 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch and analytics solutions, in a programmatic way; and also be futuristic and build systems which can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake, managed compute and orchestration frameworks (including concepts of serverless data solutions), managing the central data warehouse for extremely high-concurrency use cases, building connectors for different sources, building a customer feature repository, building cost optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.
Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer/SDE in Data
Bachelor's degree in Computer Science, Engineering, or a related field
Experience in data engineering
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
Experience with data pipeline tools such as Airflow and Spark
Experience with data modeling and data quality best practices
Excellent problem-solving and analytical skills
Strong communication and teamwork skills
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
Strong advanced SQL skills

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
Prior experience in the Indian banking segment and/or fintech is desired
Experience with non-relational databases and data stores
Building and operating highly available, distributed data processing systems for large datasets
Professional software engineering and best practices for the full software development life cycle
Designing, developing, and implementing different types of data warehousing layers
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
Building scalable data infrastructure and understanding distributed systems concepts
SQL, ETL, and data modelling
Ensuring the accuracy and availability of data to customers
Proficiency in at least one scripting or programming language for handling large-volume data processing
Strong presentation and communication skills
Posted 1 week ago
13.0 - 21.0 years
35 - 40 Lacs
Bengaluru
Work from Office
About The Role
Function: Software Engineering, Backend Development

Responsibilities:
You will work on building the biggest neo-banking app of India
You will own the design process and the implementation of standard software engineering methodologies while improving performance, scalability and maintainability
You will be translating functional and technical requirements into detailed design and architecture
You will be collaborating with UX designers and product owners on detailed product requirements
You will be part of a fast-growing engineering group
You will be responsible for mentoring other engineers, defining our tech culture and helping build a fast-growing team

Requirements:
2-6 years of experience in product development, design and architecture
Hands-on expertise in at least one of the following programming languages: Java, Python, NodeJS or Go
Hands-on expertise in SQL and NoSQL databases
Expertise in problem solving, data structures and algorithms
Deep understanding of and experience in object-oriented design
Ability to design and architect horizontally scalable software systems
Drive to constantly learn and improve yourself and the processes surrounding you
Mentoring, collaborating and knowledge sharing with other engineers in the team
Self-starter
Strive to write the most optimal code possible day in, day out

What you will get:
Posted 1 week ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About The Role
Job Title: Senior Data Engineer

As a Senior Data Engineer, you will play a key role in designing and implementing data solutions @Kotak811.
— You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure.
— Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives.

Responsibilities
1. Data Architecture and Design
a. Design and develop scalable, high-performance data architecture and data models.
b. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions.
c. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects.
d. Define and enforce data engineering best practices, standards, and guidelines.
2. Data Pipeline Development & Maintenance
a. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading for real-time and batch use cases.
b. Implement ETL processes to integrate data from various sources into data storage systems.
c. Optimize data pipelines for performance, scalability, and reliability.
i. Identify and resolve performance bottlenecks in data pipelines and analytical systems.
ii. Monitor and analyse system performance metrics, identifying areas for improvement and implementing solutions.
iii. Optimize database performance, including query tuning, indexing, and partitioning strategies.
d. Implement real-time and batch data processing solutions.
3. Data Quality and Governance
a. Implement data quality frameworks and processes to ensure high data integrity and consistency.
b. Design and enforce data management policies and standards.
c. Develop and maintain documentation, data dictionaries, and metadata repositories.
d. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies.
4. ML Models Deployment & Management (a plus)
a. Responsible for designing, developing, and maintaining the infrastructure and processes necessary for deploying and managing machine learning models in production environments.
b. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes.
c. Optimize model performance and latency for real-time inference in consumer applications.
d. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment.
e. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues.
f. Implement monitoring and logging solutions to track model performance, data drift, and system health.
5. Team Leadership and Mentorship
a. Lead data engineering projects, providing technical guidance and expertise to team members.
i. Conduct code reviews and ensure adherence to coding standards and best practices.
b. Mentor and coach junior data engineers, fostering their professional growth and development.
c. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes.
d. Stay abreast of emerging technologies, trends, and best practices in data engineering and share knowledge within the team.
i. Participate in the evaluation and selection of data engineering tools and technologies.

Qualifications
1. 3-5 years' experience, with a Bachelor's Degree in Computer Science, Engineering, Technology or a related field required.
2. Good understanding of streaming technologies like Kafka and Spark Streaming.
3. Experience with Enterprise Business Intelligence Platform/data platform sizing, tuning, optimization and system landscape integration in large-scale, enterprise deployments.
4. Proficiency in one programming language, preferably Java, Scala or Python.
5. Good knowledge of Agile and SDLC/CICD practices and tools.
6. Must have proven experience with Hadoop, MapReduce, Hive, Spark, and Scala programming. Must have in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs.
7. Proven experience in the development of conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse) and OLAP database solutions.
8. Good understanding of distributed systems.
9. Experience working extensively in a multi-petabyte DW environment.
10. Experience in engineering large-scale systems in a product environment.
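As a hedged sketch of the streaming skills called out here (Kafka plus Spark Streaming), the snippet below reads a Kafka topic with Spark Structured Streaming and maintains a running aggregate; the broker address, topic, and JSON schema are illustrative assumptions.

```python
# Illustrative Spark Structured Streaming job: count events per type from Kafka.
# Broker, topic, and JSON schema are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

schema = StructType([
    StructField("event_type", StringType()),
    StructField("account_id", StringType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
         .option("subscribe", "app-events")                 # assumed topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

counts = events.groupBy("event_type").count()

query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()
```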
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role: We are looking for a self-driven Analytics Consultant to join our team of data and domain enthusiasts in healthcare payment integrity. In this role, you will have the opportunity to work with various payers and providers, helping to reduce provider abrasion and enhance provider engagement with our innovative, highly scalable solutions. This is a work-from-office role.

Role Brief: We monitor business performance and operations, problem-solving by applying various analytics levers and involving different teams working on ML models, SQL rules, hospital profiling, pattern mining, etc., to meet client savings targets. The Analytics team functions as both the R&D and operational excellence team, constantly discovering new patterns through state-of-the-art technologies, from SQL queries to large language model (LLM) agents.

Responsibilities:
Design data-driven solutions and frameworks (descriptive and predictive) from scratch, and consult in a leadership capacity on potential solutions, storyboards, and POCs.
Drive business metrics that contribute to top-line growth and/or profitability.
Perform quantitative and qualitative analyses, including raw data analysis and deep dives, to derive insights.
Develop descriptive (reporting) to prescriptive analytics for business monitoring and operational excellence.
Translate data insights for business stakeholders to communicate and gain equivalent business context.
Apply next-gen technologies to all parts of the analytics lifecycle, including data extraction, exploratory data analysis, data mining, information extraction from unstructured data, and visualization/storyboarding.
Manage a small team of data analysts.

Skills:
7+ years of experience in strategy and business optimization.
Post-graduate degree or MBA (preferred) OR a graduate degree in Engineering, Mathematics, Operations Research, Science, or Statistics.
Experience in the healthcare industry is preferred.
At least 7+ years of experience in analytics using SQL, SAS, Python, and basic statistical concepts, and analyzing data to interpret results for the business.
Ability to translate and structure business problems to deliver technical solutions.
Proven experience in a fast-paced environment, supporting multiple concurrent projects.
Collaborative and team-oriented, with a willingness to take on various projects.
Strong desire to work in a fast-paced environment.

About EXL Health Payments Analytics: At the EXL Health Payments Analytics Center of Excellence, we are looking for passionate individuals with a growth/startup mindset who are ready to experiment, fail fast, learn, and contribute to our 5-fold growth journey, from $200M to $1B. EXL is recognized as a Special Investigations Unit by 6 of the top 10 U.S. health insurance companies, managing approximately one-third of U.S. healthcare data. We specialize in error/overpayment detection for hospital and doctor claims. Unlike typical services and consulting companies, we generate revenue from the savings we identify for our clients, on a commission or outcome basis. We productize algorithms and R&D accelerators that can be used across multiple health insurance clients for the above business case.

Our ecosystem includes:
Massive Data Assets: Millions of structured data records and thousands of unstructured records processed monthly.
Tech Investment: On-prem GPUs, Azure, AWS, Databricks, and on-prem Hadoop-Hive environments.
Leadership Push: Strong focus on digitization, data-led decision-making, and AI.
Analytics Team: A team of 100+ data enthusiasts, decision scientists, and business/subject matter experts.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Where Data Does More. Join the Snowflake team.

Snowflake's Support team is expanding! We are looking for a Senior Cloud Support Engineer who likes working with data and solving a wide variety of issues, drawing on technical experience with a variety of operating systems, database technologies, big data, data integration, connectors, and networking. Snowflake Support is committed to providing high-quality resolutions to help deliver data-driven business insights and results. We are a team of subject matter experts collectively working toward our customers' success. We form partnerships with customers by listening, learning, and building connections. Snowflake's values are key to our approach and success in delivering world-class Support. Putting customers first, acting with integrity, owning initiative and accountability, and getting it done are Snowflake's core values, which are reflected in everything we do.

As a Senior Cloud Support Engineer, your role is to delight our customers with your passion and knowledge of the Snowflake Data Warehouse. Customers will look to you for technical guidance and expert advice with regard to their effective and optimal use of Snowflake. You will be the voice of the customer regarding product feedback and improvements for Snowflake's product and engineering teams. You will play an integral role in building knowledge within the team and be part of strategic initiatives for organizational and process improvements. Based on business needs, you may be assigned to work with one or more Snowflake Priority Support customers. You will develop a strong understanding of the customer's use case and how they leverage the Snowflake platform. You will deliver exceptional service, enabling them to achieve the highest levels of continuity and performance from their Snowflake implementation. Ideally, you have worked in a 24x7 environment, handled technical case escalations and incident management, worked in technical support for an RDBMS, been on call during weekends, and are familiar with database release management.

AS A SENIOR CLOUD SUPPORT ENGINEER AT SNOWFLAKE, YOU WILL:
Drive technical solutions to complex problems, providing in-depth analysis and guidance to Snowflake customers and partners via email, web, and phone
Adhere to response and resolution SLAs and escalation processes to ensure fast resolution of customer issues that exceed expectations
Demonstrate good problem-solving skills and be process-oriented
Utilize the Snowflake environment, connectors, 3rd-party partner software, and tools to investigate issues
Document known solutions in the internal and external knowledge bases
Report well-documented bugs and feature requests arising from customer-submitted requests
Partner with engineering teams in prioritizing and resolving customer requests
Participate in a variety of Support initiatives
Provide support coverage during holidays and weekends based on business needs

OUR IDEAL SENIOR CLOUD SUPPORT ENGINEER WILL HAVE:
Bachelor's or Master's degree in Computer Science or an equivalent discipline.
5+ years' experience in a Technical Support environment or a similar technical function in a customer-facing role
Solid knowledge of at least one major RDBMS
In-depth understanding of SQL data types, aggregations, and advanced functions, including analytical/window functions
A deep understanding of resource locks and experience with managing concurrent transactions
Proven experience with query lifecycles, profiles, and execution/explain plans
Expertise in managing schedules of jobs and tasks for maximum throughput
Demonstrated ability to analyze and tune query performance and provide detailed recommendations for performance improvement
Advanced skills in interpreting SQL queries and execution workflow logic
Proven ability to rewrite joins for optimization while maintaining logical consistency
In-depth knowledge of various caching mechanisms and the ability to take advantage of caching strategies to enhance performance
Ability to interpret systems performance metrics (CPU, I/O, RAM, network stats)
Proficiency with JSON, XML, and other semi-structured data formats
Proficiency in database patch and release management

NICE TO HAVES:
Knowledge of distributed computing principles and frameworks (e.g., Hadoop, Spark)
Scripting/coding experience in any programming language
Database migration and ETL experience
Ability to monitor and optimize cloud spending using cost management tools and strategies

SPECIAL REQUIREMENTS:
Participate in pager duty rotations during nights, weekends, and holidays
Ability to work the 4th/night shift, which typically starts at 10 pm IST
Applicants should be flexible with schedule changes to meet business needs

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
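To make the window-function and explain-plan skills above concrete, here is a hedged Python sketch using the snowflake-connector-python package; the account credentials and the orders table are invented placeholders, and the same SQL applies to any RDBMS that supports window functions.

```python
# Hedged sketch: run a window-function query and inspect its plan via EXPLAIN.
# Credentials and the "orders" table are placeholders, not a real account.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="COMPUTE_WH", database="DEMO", schema="PUBLIC",
)

sql = """
SELECT customer_id,
       order_ts,
       amount,
       SUM(amount) OVER (PARTITION BY customer_id
                         ORDER BY order_ts) AS running_total
FROM orders
"""

cur = conn.cursor()
cur.execute("EXPLAIN " + sql)       # inspect the query plan before tuning
for row in cur.fetchall():
    print(row)
cur.execute(sql)
print(cur.fetchmany(5))
conn.close()
```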
Posted 1 week ago
10.0 - 12.0 years
12 - 14 Lacs
Hyderabad
Work from Office
About the Roe: Grade Leve (for interna use): 11 The Team: Our team is responsibe for the design, architecture, and deveopment of our cient facing appications using a variety of toos that are reguary updated as new technoogies emerge. You wi have the opportunity every day to work with peope from a wide variety of backgrounds and wi be abe to deveop a cose team dynamic with coworkers from around the gobe. The Impact: The work you do wi be used every singe day, its the essentia code you write that provides the data and anaytics required for crucia, daiy decisions in the capita and commodities markets. Whats in it for you: Buid a career with a goba company. Work on code that fues the goba financia markets. Grow and improve your skis by working on enterprise eve products and new technoogies. Responsibiities: Sove probems, anayze and isoate issues.Provide technica guidance and mentoring to the team and hep them adopt change as new processes are introduced.Champion best practices and serve as a subject matter authority.Deveop soutions to deveop/support key business needs.Engineer components and common services based on standard deveopment modes, anguages and toosProduce system design documents and ead technica wakthroughsProduce high quaity codeCoaborate effectivey with technica and non-technica partnersAs a team-member shoud continuousy improve the architecture Basic Quaifications: 10-12 years of experience designing/buiding data-intensive soutions using distributed computing.Proven experience in impementing and maintaining enterprise search soutions in arge-scae environments.Experience working with business stakehoders and users, providing research direction and soution design and writing robust maintainabe architectures and APIs.Experience deveoping and depoying Search soutions in a pubic coud such as AWS.Proficient programming skis at a high-eve anguages -Java, Scaa, PythonSoid knowedge of at east one machine earning research frameworksFamiiarity with containerization, scripting, coud patforms, and CI/CD.5+ years experience with Python, Java, Kubernetes, and data and workfow orchestration toos4+ years experience with Easticsearch, SQL, NoSQL,Apache spark, Fink, Databricks and Mfow.Prior experience with operationaizing data-driven pipeines for arge scae batch and stream processing anaytics soutionsGood to have experience with contributing to GitHub and open source initiatives or in research projects and/or participation in Kagge competitionsAbiity to quicky, efficienty, and effectivey define and prototype soutions with continua iteration within aggressive product deadines.Demonstrate strong communication and documentation skis for both technica and non-technica audiences. Preferred Quaifications: Search TechnoogiesQuery and Indexing content for Apache Sor, Eastic Search, etc.Proficiency in search query anguages (e.g., Lucene Query Syntax) and experience with data indexing and retrieva.Experience with machine earning modes and NLP techniques for search reevance and ranking.Famiiarity with vector search techniques and embedding modes (e.g., BERT, Word2Vec).Experience with reevance tuning using A/B testing frameworks.Big Data TechnoogiesApache Spark, Spark SQL, Hadoop, Hive, AirfowData Science Search TechnoogiesPersonaization and Recommendation modes, Learn to Rank (LTR)Preferred LanguagesPython, JavaDatabase TechnoogiesMS SQL Server patform, stored procedure programming experience using Transact SQL.Abiity to ead, train and mentor. 
About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction.
What's In It For You: Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Posted 1 week ago
5.0 - 10.0 years
14 - 20 Lacs
Bengaluru
Work from Office
Senior Data Engineer, Bangalore. Experience: 5 to 10 years. Location: Bangalore.
Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes. Optimize data flow and collection for cross-functional teams. Build infrastructure required for optimal extraction, transformation, and loading of data. Ensure data quality, reliability, and integrity across all data systems. Collaborate with data scientists and analysts to help implement models and algorithms. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc. Create and maintain comprehensive technical documentation. Evaluate and integrate new data management technologies and tools.
Qualifications: 5-9 years of professional experience in data engineering roles. Bachelor's degree in Computer Science, Engineering, or related field; Master's degree preferred.
Job Description: Expert knowledge of SQL and experience with relational databases (e.g., PostgreSQL, Redshift, TiDB, MySQL, Oracle, Teradata). Extensive experience with big data technologies (e.g., Hadoop, Spark, Hive, Flink). Proficiency in at least one programming language such as Python, Java, or Scala. Experience with data modeling, data warehousing, and building ETL pipelines. Strong knowledge of data pipeline and workflow management tools (e.g., Airflow, Luigi, NiFi). Experience with cloud platforms (AWS, Azure, or GCP) and their data services; AWS preferred. Hands-on experience building streaming pipelines with Flink, Kafka, or Kinesis; Flink preferred. Understanding of data governance and data security principles. Experience with version control systems (e.g., Git) and CI/CD practices.
Preferred Qualifications: Experience with containerization and orchestration tools (Docker, Kubernetes). Basic knowledge of machine learning workflows and MLOps. Experience with NoSQL databases (MongoDB, Cassandra, etc.). Familiarity with data visualization tools (Tableau, Power BI, etc.). Experience with real-time data processing. Knowledge of data governance frameworks and compliance requirements (GDPR, CCPA, etc.). Experience with infrastructure-as-code tools (Terraform, CloudFormation).
HR: Nikita Chaudhary, 8879637539, nikita.chaudhary@enlink.com
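As a small illustration of the scheduled ETL pipelines and workflow tools (Airflow) this role lists, here is a minimal sketch. The DAG id, task names, and what each step does are hypothetical.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
# Uses the Airflow 2.x imports; older versions name the schedule argument
# schedule_interval, which is used here for broader compatibility.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g., pull a day's partition from S3 or an upstream API (placeholder)
    print("extracting raw events")

def transform():
    print("cleaning and conforming records")

def load():
    print("loading into the warehouse")

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract before transform before load
```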
Posted 1 week ago
7.0 - 12.0 years
13 - 17 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities: Work with large, diverse datasets to deliver predictive and prescriptive analytics. Develop innovative solutions using data modeling, machine learning, and statistical analysis. Design, build, and evaluate predictive and prescriptive models and algorithms. Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation. Solve complex problems using data-driven approaches. Collaborate with cross-functional teams to align data science solutions with business goals. Lead AI/ML project execution to deliver measurable business value. Ensure data governance and maintain reusable platforms and tools. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications (Technical Skills): Programming Languages: Python, R, SQL. Machine Learning Tools: TensorFlow, PyTorch, scikit-learn. Big Data Technologies: Hadoop, Spark. Visualization Tools: Tableau, Power BI. Cloud Platforms: AWS, Azure, Google Cloud. Data Engineering: Talend, Databricks, Snowflake, Data Factory. Statistical Software: R, Python libraries. Version Control: Git.
Preferred Qualifications: Master's or PhD in Data Science, Computer Science, Statistics, or related field. Certifications in data science or machine learning. 7+ years of experience in a senior data science role with enterprise-scale impact. Experience managing AI/ML projects end-to-end. Solid communication skills for technical and non-technical audiences. Demonstrated problem-solving and analytical thinking. Business acumen to align data science with strategic goals. Knowledge of data governance and quality standards.
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
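For illustration of the predictive-modeling loop this role describes (build a model, then evaluate it), here is a minimal scikit-learn sketch. Synthetic data stands in for real features and outcomes, which are obviously not public.

```python
# Sketch: fit a classifier on a train split and score it on a holdout split.
# make_classification generates synthetic data purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out data: predicted probabilities against true labels.
probs = model.predict_proba(X_test)[:, 1]
print(f"holdout AUC: {roc_auc_score(y_test, probs):.3f}")
```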
Posted 1 week ago
6.0 - 9.0 years
25 - 37 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Data Science Consultants (DSCs) design, develop and execute high-impact analytics solutions for large, complex, structured and unstructured data sets (including big data) to help clients make better fact-based decisions.
Responsibilities: Develop advanced algorithms that solve problems of large dimensionality in a computationally efficient and statistically effective manner. Execute statistical and data mining techniques (e.g. hypothesis testing, machine learning and retrieval processes) on large data sets to identify trends, figures and other relevant information. Collaborate with clients and cross-functional teams to develop and deploy data science solutions effectively and communicate the analysis findings. Evaluate emerging datasets and technologies that may contribute to our analytical platform. Own the development of select assets/accelerators for efficient scaling of capability. Contribute to the thought leadership of the firm by researching evolving topics and publishing on them.
Qualifications: Bachelor's or master's degree or PhD in Computer Science (or Statistics) from a premier institute, and strong academic performance with analytic and quantitative coursework is required. Knowledge of big data/advanced analytics concepts and algorithms (e.g. text mining, social listening, recommender systems, predictive modeling, etc.). Knowledge of programming (e.g. Java/Python/R). Exposure to tools/platforms (e.g. Hadoop ecosystem and database systems). Excellent oral and written communication skills. Strong attention to detail, with a research-focused mindset. Excellent critical thinking and problem-solving skills.
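As a concrete instance of one technique this posting names (hypothesis testing), here is a short sketch: a two-sample Welch t-test comparing a metric across two groups. The group data is synthetic and only stands in for real client data.

```python
# Sketch: Welch's two-sample t-test on simulated control/treatment metrics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=1000)    # baseline group
treatment = rng.normal(loc=10.3, scale=2.0, size=1000)  # shifted group

# equal_var=False gives Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```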
Posted 1 week ago
3.0 - 6.0 years
1 - 5 Lacs
Hyderabad
Work from Office
Total Yrs. of Experience: 3-6 years. Relevant Yrs. of Experience: 3-6 years.
Detailed JD (Roles and Responsibilities): The candidate will be responsible for Kafka support and testing, should be familiar with Kafka commands, and should have strong experience in troubleshooting. Exposure to KSQL and REST Proxy is preferred.
Mandatory skills: Kafka Admin. Desired skills: KSQL and REST Proxy.
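For illustration of the Kafka admin and KSQL skills listed here, a minimal Python sketch: a broker metadata check with the confluent-kafka AdminClient, plus a KSQL statement submitted over ksqlDB's REST API. The broker address and ksqlDB URL are assumptions.

```python
# Sketch: basic Kafka troubleshooting steps in Python.
import requests
from confluent_kafka.admin import AdminClient

# Assumes a broker reachable at localhost:9092.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# List topics and their partition counts -- a routine first diagnostic.
metadata = admin.list_topics(timeout=10)
for name, topic in metadata.topics.items():
    print(name, len(topic.partitions), "partitions")

# Submit a KSQL statement to a ksqlDB server's /ksql endpoint
# (assumes ksqlDB listening on localhost:8088).
resp = requests.post(
    "http://localhost:8088/ksql",
    json={"ksql": "SHOW STREAMS;", "streamsProperties": {}},
    timeout=10,
)
print(resp.json())
```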
Posted 1 week ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
Business Overview: International Card Services (ICS) is the leading provider of Credit Cards, Business Financing, T&E Solutions, Supplier Payments, and Cross-Border Payments that help consumer, small, mid-size, and large corporations around the world manage nearly every facet of their business spending. International Card Services Centre of Excellence (ICS COE) is the newly established function within ICS with the mission to unlock growth and enable ICS to become the fastest growing segment within American Express. The ASI (Analytics & Strategic Insights) Marketing Analytics team sits within ICS COE and is the analytical engine that enables business growth across international markets. This is an outstanding opportunity in a high-visibility role to develop data capabilities for commercial sales prospecting for lead markets in ICS. The incumbent would lead a team of high-performing developers primarily based out of India.
Job Responsibilities: Design, development and maintenance of the prospect database for our lead markets in ICS. Discover and analyze the technical architecture for new and existing solutions on an ongoing basis. Partner with business, analytics, and machine learning teams to enhance the prospect database.
Basic Qualifications: Bachelor's or master's degree in Information Technology, Computer Science, Mathematics, Engineering or equivalent. 1-3 years of experience in developing solutions across a variety of platforms and technologies such as Big Data, PySpark, Hive, Scala, Java, Scripting. Experience in Agile Scrum methodology or the Software Delivery Life Cycle. Strong analytical and problem-solving skills. Ability to think abstractly and deal with ambiguous/under-defined problems. Ability to work in a high-pressure environment with minimal errors. Strong ability to formulate and communicate strategies in a clear, compelling way.
Technical Skills/Capabilities: Expertise in Big Data, Hive, SQL. Background in programming: Java and Hadoop (a short PySpark/Hive sketch follows this listing). We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
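The PySpark/Hive sketch promised above: a minimal aggregation over a Hive table with Spark SQL, of the kind this role's Big Data stack implies. The database, table, and column names are hypothetical.

```python
# Sketch: query a Hive table from PySpark with Hive support enabled,
# aggregating a hypothetical prospect_db.leads table by market.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("prospect-db-refresh")
    .enableHiveSupport()   # reads table metadata from the Hive metastore
    .getOrCreate()
)

prospects = spark.sql("""
    SELECT market, COUNT(*) AS prospect_count
    FROM prospect_db.leads
    GROUP BY market
""")
prospects.show()
spark.stop()
```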
Posted 1 week ago
5.0 - 10.0 years
13 - 20 Lacs
Hyderabad, Pune
Hybrid
Looking for a Big Data Developer (PySpark + AWS + Java/Scala) - Role Type 1, and a Hadoop Developer with Python + Spark - Role Type 2.
Points to remember: Please fill in the Candidate Summary Sheet. Not considering more than 30 days' notice period.
Highlights of this role: It's a long-term role with a high possibility of conversion within 6 months or after 6 months (if you perform well). Interview: 2 rounds in total (both virtual), but one face-to-face meeting is mandatory at any location - Pune/Hyderabad/Bangalore/Chennai. UAN verification will be done in the background check; any overlap in past employment will eliminate you. Continuous PF deduction for the last 4 years is mandatory. One face-to-face meeting is mandatory; otherwise we can't onboard you.
Client Company: one of the leading technology consulting firms. Payroll Company: one of the leading IT services and staffing companies (with a presence in India, UK, Europe, Australia, New Zealand, US, Canada, Singapore, Indonesia, and the Middle East). Do not change the subject line and do not create a new email while sharing/applying for this position; please reply on this email thread only.
Job Description - Role 1: Big Data Developer. Must-have skills: Big Data (PySpark + Java/Scala). Preferred skills: AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar); CI/CD (Jenkins or another); relational databases experience (any); NoSQL databases experience (any); microservices, domain services, API gateways or similar; containers (Docker, K8s, or similar).
Role 2: Hadoop Developer. Must-have/mandatory skills: Hadoop, Hive, HDFS, SQL and Python. Experience range: 4+ years. Relevant experience: 4+ years. Location: only Pune and Hyderabad; if you are applying from outside Pune or Hyderabad, you will have to relocate. Work mode: a minimum of 2 days of work from home.
Mandatory to share the Candidate Summary Sheet. Interested parties can share their resume at shant@harel-consulting.com along with the details below:
Applying for which role (please mention the role name); Your name; Contact no.; Email ID; Do you have a valid passport; Total experience; Experience in Big Data; Experience in Hive; Experience in Python, Java or Scala, and how much; Experience in PySpark; Experience in AWS; Current CTC; Expected CTC; Notice period in your current company; Are you currently working; if not, when did you leave your last company; Current location; Preferred location; It's a contract-to-hire (C2H) role - are you okay with that; Highest qualification; Current employer (payroll company name); Previous employer (payroll company name); 2nd previous employer (payroll company name); 3rd previous employer (payroll company name); Are you holding any offer; Are you expecting any offer; Is PF deduction happening in your current company; Did PF deduction happen at your 2nd-last employer; Did PF deduction happen at your 3rd-last employer; Latest photo.
BR, Shantpriya, Harel Consulting, shant@harel-consulting.com, 9582718094
Posted 1 week ago
4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Description Summary: The Data Scientist will work in teams addressing statistical, machine learning and data understanding problems in a commercial technology and consultancy development environment. In this role, you will contribute to the development and deployment of modern machine learning, operational research, semantic analysis, and statistical methods for finding structure in large data sets.
Site Overview: Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is GE Aerospace's multidisciplinary research and engineering center. Pushing the boundaries of innovation every day, engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing.
Role Overview: As a Data Scientist, you will be part of a data science or cross-disciplinary team on commercially facing development projects, typically involving large, complex data sets. These teams typically include statisticians, computer scientists, software developers, engineers, product managers, and end users, working in concert with partners in GE business units. Potential application areas include remote monitoring and diagnostics across infrastructure and industrial sectors, financial portfolio risk assessment, and operations optimization.
In this role, you will: Develop analytics within well-defined projects to address customer needs and opportunities. Work alongside software developers and software engineers to translate algorithms into commercially viable products and services. Work in technical teams in development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics. Perform exploratory and targeted data analyses using descriptive statistics and other methods (a short illustrative sketch follows this listing). Work with data engineers on data quality assessment, data cleansing and data analytics. Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes. Share and discuss findings with team members.
Required Qualifications: Bachelor's Degree in Computer Science or STEM Majors (Science, Technology, Engineering and Math) with basic experience.
Desired Characteristics: Expertise in one or more programming languages and analytic software tools (e.g., Python, R, SAS, SPSS). Strong understanding of machine learning algorithms, statistical methods, and data processing techniques. Exceptional ability to analyze large, complex data sets and derive actionable insights. Proficiency in applying descriptive, predictive, and prescriptive analytics to solve real-world problems. Demonstrated skill in data cleansing, data quality assessment, and data transformation. Experience working with big data technologies and tools (e.g., Hadoop, Spark, SQL). Excellent communication skills, both written and verbal. Ability to convey complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams. Demonstrated commitment to continuous learning and staying up-to-date with the latest advancements in data science, machine learning, and related fields. Active participation in the data science community through conferences, publications, or contributions to open-source projects. Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements. Flexibility to work on diverse projects across various domains.
Preferred Qualifications: - Awareness of feature extraction and real-time analytics methods. - Understanding of analytic prototyping, scaling, and solutions integration. - Ability to work with large, complex data sets and derive meaningful insights. - Familiarity with machine learning techniques and their application in solving real-world problems. - Strong problem-solving skills and the ability to work independently and collaboratively in a team environment. - Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Domain Knowledge: Demonstrated awareness of industry and technology trends in data science. Demonstrated awareness of customer and stakeholder management and business metrics. Leadership: Demonstrated awareness of how to function in a team setting. Demonstrated awareness of critical thinking and problem-solving methods. Demonstrated awareness of presentation skills. Personal Attributes: Demonstrated awareness of how to leverage curiosity and creativity to drive business impact. Humble: respectful, receptive, agile, eager to learn. Transparent: shares critical information, speaks with candor, contributes constructively. Focused: quick learner, strategically prioritizes work, committed. Leadership ability: strong communicator, decision-maker, collaborative. Problem solver: analytical-minded, challenges existing processes, critical thinker. Whether we are manufacturing components for our engines, driving innovation in fuel and noise reduction, or unlocking new opportunities to grow and deliver more productivity, our GE Aerospace teams are dedicated and making a global impact. Join us and help move the aerospace industry forward. Additional Information: Relocation Assistance Provided: No
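The exploratory-analysis sketch promised in the responsibilities above: a minimal pandas profiling pass over a data set before any modeling. The CSV path and column names are placeholders.

```python
# Sketch: first-pass descriptive statistics and data-quality checks with pandas.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")  # hypothetical input file

print(df.describe())  # central tendency, spread, and extremes per numeric column
print(df.isna().mean().sort_values(ascending=False))  # fraction missing per column
# Per-group summary, e.g. mean/std of a reading per asset (columns hypothetical):
print(df.groupby("asset_id")["temperature"].agg(["mean", "std"]).head())
```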
Posted 1 week ago
0.0 - 1.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Description Summary: We are looking to grow our Software Controls and Optimization team at GE Aerospace Research and are looking for top-notch researchers to be part of this exciting journey. As a group, we innovate and execute on the R&D strategy for GE Aerospace on a range of problems, from designing inspection solutions for aircraft engines to building predictive/prescriptive analytics for a variety of applications that improve process efficiencies in the business.
Company Overview: Working at GE Aerospace means you are bringing your unique perspective, innovative spirit, drive, and curiosity to a collaborative and diverse team working to advance aerospace for future generations. If you have ideas, we will listen. Join us and see your ideas take flight!
Site Overview: Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is our multidisciplinary research and engineering center. Engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing.
Role Overview: We are looking for highly motivated people with a proven track record to conduct research in natural language processing, artificial intelligence, and machine learning. As a Research Intern, you will be working with scientists in GE Aerospace Research to develop search and recommendation systems to improve the productivity of our Engineering teams (a small retrieval sketch follows this listing). Your responsibilities will include developing and implementing algorithms to process Aerospace domain data for recommendation systems, designing experiments, conducting thorough evaluations, and documenting your work (e.g., publications, invention disclosures). It will also include effectively communicating your findings with the appropriate stakeholders. You will encounter, and have an opportunity to tackle, unique challenges posed by the data and problems in the Aerospace domain, including data quality, highly domain-specific vocabulary, and how to integrate AI solutions into our safety-critical and regulated workflows and processes.
Ideal candidate: Should have experience in machine learning.
Required Qualifications: Enrolled in a full-time Master's or PhD degree program in Computer Science, Electronics, Industrial, Electrical, Mechanical or related Engineering field with specialization in Natural Language Processing, Machine Learning, AI or Statistics. At least one year of experience in conducting independent research. Proficient in implementing algorithms, data pipelines and solutions in Python. Self-starter, ability to work in ambiguous environments, and excellent communication skills.
Desired Qualifications: Enrolled in a PhD degree program with at least three years of experience in conducting independent research. Previous experience in training, fine-tuning (including instruction-tuning), and deploying Large Language Models/vision algorithms. Proven track record of publications at top AI conferences (or co-located workshops). Experience working with problems and data from industrial domains (Aviation, Energy, Healthcare, Manufacturing, etc.). Strong foundations in design, analysis, and implementation of algorithms in different computing architectures.
Humble: respectful, receptive, agile, eager to learn Transparent: shares critical information, speaks with candor, contributes constructively Focused: quick learner, strategically prioritizes work, committed Leadership ability: strong communicator, decision-maker, collaborative Problem solver: analytical-minded, challenges existing processes, critical thinker At GE Aerospace, we have a relentless dedication to the future of safe and more sustainable flight and believe in our talented people to make it happen. Here, you will have the opportunity to work on really cool things with really smart and collaborative people. Together, we will mobilize a new era of growth in aerospace and defense. Where others stop, we accelerate. Additional Information Relocation Assistance Provided: Yes
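The retrieval sketch promised above: a simple TF-IDF plus cosine-similarity baseline of the kind a search/recommendation project might start from. The documents and query are placeholders standing in for domain text.

```python
# Sketch: rank documents against a query using TF-IDF vectors and
# cosine similarity -- a common first baseline for text retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "fan blade inspection procedure for turbofan engines",
    "combustor liner thermal barrier coating repair",
    "additive manufacturing of turbine components",
]
query = ["blade inspection"]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)    # fit vocabulary on the corpus
query_vec = vectorizer.transform(query)      # project the query into it

scores = cosine_similarity(query_vec, doc_vecs).ravel()
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```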
Posted 1 week ago
5.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Description Summary: The Senior NLP Data Scientist will work in teams addressing natural language processing, large language models (LLMs), and agentic AI problems in a commercial technology and consultancy development environment. In this role, you will contribute to the development and deployment of advanced NLP techniques, large language models, agentic AI systems, and semantic analysis methods for extracting and understanding structure in large text data sets.
Site Overview: Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is our multidisciplinary research and engineering center. Engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing.
Role Overview: Develop and implement state-of-the-art NLP models and algorithms, with a focus on large language models (LLMs) and agentic AI. Collaborate with cross-functional teams to identify and solve complex NLP problems in various domains. Design and conduct experiments to evaluate the performance of NLP models and improve their accuracy and efficiency. Deploy NLP models into production environments, ensuring scalability and robustness. Analyze large text data sets to uncover insights and patterns using advanced statistical and machine learning techniques. Stay up-to-date with the latest research and advancements in NLP, LLMs, and agentic AI, and apply this knowledge to ongoing projects. Communicate findings and recommendations to stakeholders through clear and concise reports and presentations. Mentor and guide junior data scientists and engineers in best practices for NLP and machine learning. Develop and maintain documentation for NLP models, algorithms, and processes. Collaborate with product managers and engineers to integrate NLP solutions into products and services. Conduct code reviews and ensure adherence to coding standards and best practices. Participate in the development of data collection and annotation strategies to improve model performance. Contribute to the development of intellectual property, including patents and publications, in the field of NLP and AI.
The Ideal Candidate: Should have experience in Image Analytics, Computer Vision, Python and cloud platforms.
Required Qualifications: Bachelor's Degree in Computer Science or STEM Majors (Science, Technology, Engineering and Math) with 5+ years of experience in data science. Demonstrated skill in the use of Python and/or other analytic software tools or languages. Demonstrated skill in guiding teams to solve business problems. Strong communication, interpersonal and leadership skills.
Preferred Qualifications: Proven experience in developing and deploying NLP models, particularly large language models (LLMs) and agentic AI systems. Strong programming skills in Python and familiarity with NLP libraries and frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, LangGraph and spaCy (a brief illustrative sketch follows this listing). Experience with cloud platforms and tools for deploying machine learning models (e.g., AWS, GCP, Azure). Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders. Experience with transfer learning, fine-tuning, and prompt engineering for LLMs. Knowledge of agentic AI principles and their application in real-world scenarios.
Familiarity with big data technologies and tools such as Hadoop, Spark, and SQL. Publications or contributions to the NLP and AI research community. At GE Aerospace, we have a relentless dedication to the future of safe and more sustainable flight and believe in our talented people to make it happen. Here, you will have the opportunity to work on really cool things with really smart and collaborative people. Together, we will mobilize a new era of growth in aerospace and defense. Where others stop, we accelerate. Additional Information Relocation Assistance Provided: Yes
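The sketch promised in the preferred qualifications above: loading a pretrained model with Hugging Face Transformers (one of the libraries the posting names) for a basic text-classification task. The checkpoint shown is a common public default, chosen only for illustration.

```python
# Sketch: run a pretrained text classifier via the Transformers pipeline API.
# Downloads the model on first use; requires the transformers package.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
# Returns a list of {"label": ..., "score": ...} dicts for the input text.
print(classifier("The engine telemetry pipeline is running smoothly."))
```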
Posted 1 week ago