7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data and reporting solutions in the cloud, primarily using the Microsoft Azure platform and Power BI. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions. This position will design, develop, implement, test, deploy, monitor, and maintain the delivery of data enrichments and reporting models using MS Fabric/Power BI infrastructure.

Primary Responsibilities
- Work with the BI team to build and deploy healthcare data enrichments
- Design and develop high-performance reporting models and dashboards using Power BI/MS Fabric
- Deploy and manage Power BI dashboards using the Power BI service
- Ensure connectivity with various sources (flat files, on-prem databases, Snowflake, Databricks) using Live, DirectQuery, and Import connections
- Design and develop Azure Databricks jobs using Python and Spark
- Develop and maintain CI/CD processes using Jenkins, GitHub, and Maven
- Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security
- Continuously explore new Azure and Power BI features and capabilities and assess their applicability to business needs
- Create detailed documentation for PBI/MS Fabric processes, architecture, and implementation patterns
- Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
- Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
- Contribute to full-lifecycle project implementations, from design and development to deployment and monitoring
- Ensure solutions adhere to security, compliance, and governance standards
- Identify solutions to non-standard requests and problems
- Support and maintain the self-service BI warehouse
- Work with business owners to add new enrichments and to design and implement new reporting models
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Undergraduate degree or equivalent experience
- 7+ years of overall experience in data and analytics engineering
- 5+ years of experience designing, developing, and deploying MS Fabric and Power BI dashboards
- 5+ years of experience working with Azure, Databricks, and PySpark/Spark SQL
- Solid experience with CI/CD tools such as Jenkins and GitHub
- In-depth understanding of MS Fabric and Power BI, including deploying and managing dashboards, apps, and workspaces, along with access and security management
- Proven excellent communication skills

Preferred Qualifications
- Snowflake/Airflow experience
- Experience with or knowledge of health care concepts: E&I, M&R, and C&S lines of business, Claims, Members, Providers, Payers, Underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #NIC
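To make the Databricks side of this role concrete, below is a minimal PySpark sketch of a reporting-model job of the kind described: aggregating raw claims into a summary table that a Power BI dashboard could read. The table and column names (healthcare.claims_raw, reporting.claims_summary, service_date, paid_amount) are hypothetical placeholders, not Optum's actual schema.

```python
# Hedged sketch of a Databricks reporting-model job in PySpark.
# All table and column names here are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-reporting-model").getOrCreate()

claims = spark.read.table("healthcare.claims_raw")

# Aggregate claim counts and paid amounts by member and month,
# the shape a dashboard-facing reporting model typically takes.
summary = (
    claims
    .withColumn("claim_month", F.date_trunc("month", F.col("service_date")))
    .groupBy("member_id", "claim_month")
    .agg(
        F.count("*").alias("claim_count"),
        F.sum("paid_amount").alias("total_paid"),
    )
)

# Persist as a Delta table that Power BI can query via DirectQuery or Import.
(summary.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("reporting.claims_summary"))
```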
Posted 6 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions in the cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.

Primary Responsibilities
- Ingest data from multiple on-prem and cloud data sources using various tools and capabilities in Azure
- Design and develop Azure Databricks processes using PySpark/Spark SQL
- Design and develop orchestration jobs using ADF and Databricks Workflows
- Analyze data engineering processes under development and act as an SME to troubleshoot performance issues and suggest improvements
- Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, etc.
- Build a test framework for Databricks notebook jobs to enable automated testing before code deployment
- Design and build POCs to validate new ideas, tools, and architectures in Azure
- Continuously explore new Azure services and capabilities and assess their applicability to business needs
- Create detailed documentation for cloud processes, architecture, and implementation patterns
- Work with the data and analytics team to build and deploy efficient data engineering processes and jobs on the Azure cloud
- Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
- Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
- Contribute to full-lifecycle project implementations, from design and development to deployment and monitoring
- Ensure solutions adhere to security, compliance, and governance standards
- Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
- Identify solutions to non-standard requests and problems
- Support and maintain the self-service BI warehouse
- Mentor and support existing on-prem developers in the cloud environment
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Undergraduate degree or equivalent experience
- 4+ years of overall experience in data and analytics engineering
- 4+ years of experience working with Azure, Databricks, ADF, and Data Lake
- Solid experience working with data platforms and products using PySpark and Spark SQL
- Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
- In-depth understanding of Azure architecture and the ability to produce efficient designs and solutions
- Highly proficient in Python and SQL
- Proven excellent communication skills

Preferred Qualifications
- Snowflake and Airflow experience
- Power BI development experience
- Experience with or knowledge of health care concepts: E&I, M&R, and C&S lines of business, Claims, Members, Providers, Payers, Underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission. #NIC
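As an illustration of the ingestion work this posting describes, here is a minimal sketch using Databricks Auto Loader (the cloudFiles source) to land files incrementally in a Delta table. The storage paths and table name are assumptions for the example, not the team's real locations.

```python
# Hedged sketch of incremental ingestion with Databricks Auto Loader.
# Paths and table names below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw_stream = (
    spark.readStream
    .format("cloudFiles")                 # Databricks Auto Loader source
    .option("cloudFiles.format", "json")  # format of the incoming files
    .option("cloudFiles.schemaLocation",
            "abfss://lake@account.dfs.core.windows.net/_schemas/events")
    .load("abfss://lake@account.dfs.core.windows.net/raw/events/")
)

# Append newly arrived files to a bronze Delta table; the checkpoint lets
# the job resume exactly where it left off after a restart or failure.
(raw_stream.writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://lake@account.dfs.core.windows.net/_checkpoints/events")
    .trigger(availableNow=True)  # drain the current backlog, then stop
    .toTable("bronze.events"))
```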
Posted 6 days ago
5.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary
We are looking for an experienced Azure Databricks Architect with a strong background in designing and implementing end-to-end data solutions on Azure Cloud. The ideal candidate has expertise in Databricks, Azure Data Factory, Azure DevOps, Power BI, Unity Catalog, and data governance, cataloging, and modelling. The Azure Databricks Architect will work closely with our data engineering, data science, and business stakeholders to design and implement scalable, secure, and efficient data architectures that meet our business requirements.

Key Responsibilities
- Design and implement scalable, secure, and efficient data architectures on Azure Cloud using Databricks, Azure Data Factory, and other related technologies.
- Lead the development of end-to-end data solutions on Azure Cloud, including data ingestion, processing, storage, and visualization.
- Collaborate with data engineering, data science, and business stakeholders to identify business requirements and design data architectures that meet those requirements.
- Develop and maintain data governance, cataloging, and modelling frameworks to ensure data quality, security, and compliance.
- Implement data security and access controls using Azure Active Directory, Unity Catalog, and other related technologies.
- Develop and maintain data pipelines using Azure Data Factory, Databricks, and other related technologies.
- Develop and maintain data visualizations using Power BI, Databricks, and other related technologies.
- Collaborate with DevOps engineers to develop and maintain CI/CD pipelines using Azure DevOps, Databricks, and other related technologies.
- Stay up to date with the latest Azure Cloud technologies and trends, and apply that knowledge to improve our data architectures and solutions.

Requirements
- 5-6 years of experience in designing and implementing data architectures on Azure Cloud using Databricks, Azure Data Factory, and other related technologies.
- Strong experience in data governance, cataloging, and modelling, including data quality, security, and compliance.
- Experience in developing and maintaining data pipelines using Azure Data Factory, Databricks, and other related technologies.
- Experience in developing and maintaining data visualizations using Power BI or Tableau.
- Experience collaborating with DevOps engineers to develop and maintain CI/CD pipelines using Azure DevOps, Databricks, and other related technologies.
- Strong understanding of Azure Cloud security, including Azure Active Directory, Unity Catalog, and other related technologies.
- Strong understanding of data architecture principles, including scalability, security, and efficiency.
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Key Technical Expertise: Python, PySpark, Spark SQL
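Since the role emphasizes Unity Catalog and access control, here is a minimal sketch of how such grants might look when run from a Databricks notebook. The catalog, schema, and group names are hypothetical; actual privileges would follow the organization's governance model.

```python
# Hedged sketch of Unity Catalog access control from a Databricks notebook.
# Catalog, schema, and group names are illustrative assumptions.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.curated")

# Analysts get read access to the curated schema.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.curated TO `data-analysts`")
spark.sql("GRANT SELECT ON SCHEMA analytics.curated TO `data-analysts`")

# The engineering group gets full control over the same schema.
spark.sql("GRANT ALL PRIVILEGES ON SCHEMA analytics.curated TO `data-engineers`")
```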
Posted 6 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS Services (Glue, Lambda, Step Functions, S3, Athena)

Job Description
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.

Responsibilities
- Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
- Collaborate with analysts to understand data requirements and ensure data availability and quality.
- Write and optimize SQL queries for data extraction, transformation, and loading.
- Utilize Git for version control, ensuring proper documentation and tracking of code changes.
- Design, implement, and manage scalable data lakes on AWS, including S3 or other relevant services, for efficient data storage and retrieval.
- Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
- Create interactive dashboards and data visualizations in Amazon QuickSight.
- Automate workflows using AWS services such as EventBridge and Step Functions.
- Monitor and optimize data processing workflows for performance and scalability.
- Troubleshoot data-related issues and provide timely resolution.
- Stay up to date with industry best practices and emerging technologies in data engineering.

Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field. Master's degree is a plus.
- Strong proficiency in PySpark and Python for data processing and analysis.
- Proficiency in SQL for data manipulation and querying.
- Experience with version control systems, preferably Git.
- Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
- Familiarity with Databricks and its concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively within a team.
- Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills
- Knowledge of data warehousing concepts and data modeling.
- Familiarity with big data technologies like Hadoop and Spark.
- AWS certifications related to data engineering.
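For reference, a minimal sketch of the PySpark-on-Glue pattern this posting centers on: read from the Glue Data Catalog, filter, and write curated Parquet to S3. The database, table, and bucket names are placeholders for the example.

```python
# Hedged sketch of an AWS Glue ETL job in PySpark.
# Catalog database, table, and S3 paths are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders_raw"
)

# Drop rows with a missing key, then write curated Parquet to the lake.
orders_df = orders.toDF().filter("order_id IS NOT NULL")
orders_df.write.mode("overwrite").parquet("s3://example-data-lake/curated/orders/")

job.commit()
```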
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
The Data/AI Engineer is an individual contributor role focused on advancing AT&T's enterprise-scale AI solutions by developing, deploying, and maintaining robust machine learning operations pipelines and data engineering solutions. This role requires a strong blend of AI, data engineering, and MLOps expertise, ideally with exposure to technology, fraud management, and financial or risk domains. The ideal candidate thrives in agile product development environments and has a proven track record of delivering scalable, production-ready AI products.

Key Responsibilities
- Design, build, and optimize scalable data pipelines for ingestion, processing, and integration from diverse data sources.
- Develop and deploy AI and machine learning models into production environments, ensuring reliability and scalability.
- Utilize cloud platforms (AWS, Azure, GCP) and modern data technologies (e.g., Snowflake, Databricks, Kafka) to manage large-scale data workflows.
- Collaborate closely with data scientists and analysts to translate analytical models into operational AI solutions.
- Implement data quality checks, monitoring, and alerting to ensure data integrity and model performance.
- Support continuous integration and continuous deployment (CI/CD) processes for AI and data workflows.
- Apply best practices for data security, privacy, and compliance within all engineering solutions.
- Troubleshoot and resolve data and AI system issues in production environments.
- Document architecture, processes, and technical specifications to ensure maintainability and knowledge sharing.
- Stay up to date with emerging technologies and industry trends in data engineering and AI.
- Experience working in agile/scrum product development teams.
- Strong analytical and problem-solving skills with a client-focused mindset.
- Excellent communication and presentation skills for conveying complex analytics and AI concepts to diverse audiences.

Qualifications
Required:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Proven experience in data engineering and AI/ML model deployment in enterprise environments.
- Proficiency in programming languages such as Python and SQL, and familiarity with Spark or similar distributed computing frameworks.
- Experience with cloud data platforms and services (AWS, Azure, GCP).
- Strong understanding of data pipeline architecture, ETL/ELT processes, and data warehousing concepts.
- Familiarity with containerization (Docker) and orchestration tools (Kubernetes) is a plus.
- Excellent problem-solving skills and ability to work collaboratively in cross-functional teams.

Preferred:
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Knowledge of real-time data streaming technologies (Kafka, Kinesis).
- Understanding of data governance, data privacy regulations, and responsible AI principles.
- Experience with CI/CD pipelines and automation tools for ML workflows.
- Strong communication skills for technical and non-technical audiences.
- Familiarity with Generative AI, Large Language Model (LLM) workflows, and Graph-based Retrieval-Augmented Generation (RAG) techniques.
- Background in telecommunications, fraud management, financial services, or risk analytics.
- Knowledge of responsible AI practices, data governance, and bias mitigation strategies.
- Willingness to work flexible shifts as required to support business operations.
#DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City - Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job Category: BigData
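To ground the MLOps side of the role, below is a minimal sketch of training a model and recording it with MLflow so a CI/CD pipeline can later promote a versioned artifact. The run name and synthetic data are assumptions for illustration.

```python
# Hedged sketch of MLflow experiment tracking for an MLOps workflow.
# The synthetic dataset and run name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="fraud-baseline"):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Log the parameters, the headline metric, and the model artifact,
    # so downstream deployment tooling can pull a versioned model.
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")
```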
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Role Overview
As a Specialist Data/AI Engineer – QA at AT&T, you will be responsible for ensuring the quality, reliability, and performance of data pipelines, AI models, and analytics solutions. You will design and execute comprehensive testing strategies for data and AI systems, including validation of data integrity, model accuracy, and system scalability. Your role is critical to delivering robust, production-ready AI and data solutions that meet AT&T's high standards.

Key Responsibilities
- Develop and implement QA frameworks, test plans, and automated testing scripts for data pipelines and AI/ML models.
- Validate data quality, consistency, and accuracy across ingestion, transformation, and storage processes.
- Test AI/ML model performance, including accuracy, bias, robustness, and drift detection.
- Utilize cloud platforms (AWS, Azure, GCP) and modern data technologies (e.g., Snowflake, Databricks, Kafka) to manage large-scale data workflows.
- Collaborate with data engineers, data scientists, and product teams to identify test requirements and ensure comprehensive coverage.
- Perform regression, integration, system, and performance testing on data and AI workflows.
- Automate testing processes using appropriate tools and frameworks to enable continuous testing in CI/CD pipelines.
- Monitor production systems to detect issues proactively and support root cause analysis for defects or anomalies.
- Document test results, defects, and quality metrics, communicating findings to technical and non-technical stakeholders.
- Advocate for quality best practices and contribute to improving testing methodologies across the CDO.
- Stay current with industry trends and emerging tools in data engineering, AI, and QA automation.

Qualifications
Required:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Experience in quality assurance or testing roles focused on data engineering, AI, or machine learning systems.
- Proficiency in programming and scripting languages such as Python and SQL, and experience with test automation frameworks.
- Strong understanding of data pipelines, ETL/ELT processes, and data validation techniques.
- Familiarity with machine learning concepts and model evaluation metrics.
- Experience with cloud platforms (AWS, Azure, GCP) and data platforms (Snowflake, Databricks) is preferred.
- Knowledge of CI/CD tools and integration of automated testing within deployment pipelines.
- Excellent analytical, problem-solving, and communication skills.

Preferred:
- Experience with AI/ML model testing frameworks and bias/fairness testing.
- Familiarity with containerization (Docker) and orchestration (Kubernetes) environments.
- Understanding of data governance, compliance, and responsible AI principles.
- Experience with real-time data streaming and testing associated workflows.

#DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City - Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law.
In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job Category: BigData
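As a concrete illustration of the automated data-quality testing this role calls for, here is a minimal pytest-style sketch against a PySpark transformation. The function under test (clean_events) and its schema are assumed for the example.

```python
# Hedged sketch of an automated data-quality test for a pipeline step.
# The transformation under test and its schema are illustrative assumptions.
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("qa-tests").getOrCreate()


def clean_events(df):
    # Stand-in for the real transformation: drop rows missing the key.
    return df.dropna(subset=["event_id"])


def test_no_null_keys_and_no_duplicates(spark):
    raw = spark.createDataFrame(
        [("e1", "click"), (None, "view"), ("e2", "click")],
        ["event_id", "event_type"],
    )
    cleaned = clean_events(raw)

    # Null keys must be removed, and remaining keys must be unique.
    assert cleaned.filter("event_id IS NULL").count() == 0
    assert cleaned.count() == cleaned.select("event_id").distinct().count()
```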
Posted 6 days ago
5.0 - 9.0 years
0 - 0 Lacs
Karnataka
On-site
You will be responsible for building and interpreting machine learning models on real business data from the SigView platform, such as logistic regression, boosted trees (gradient boosting), random forests, and decision trees. Your tasks will include identifying data sources, integrating multiple sources or types of data, and applying data analytics expertise within a data source to develop methods that compensate for limitations and extend the applicability of the data.

You will also extract data from relevant data sources, including internal systems and third-party data sources, through manual and automated web scraping. Your role will involve validating third-party metrics by cross-referencing various syndicated data sources, and determining which numerical variables to use in their raw form, which to categorize into buckets, and which to use to create new calculated numerical variables.

You will perform exploratory data analysis using PySpark to finalize the list of variables necessary to solve the business problem, and transform formulated problems into implementation plans for experiments by applying appropriate data science methods, algorithms, and tools. After data preparation, you will work with offshore teams to identify the best statistical model or analytical solution that can be applied to the available data to solve the business problem and derive actionable insights.

Your responsibilities will also include collating the results of the models and preparing detailed technical reports showing how the models can be used and modified for different scenarios in the future to develop predictive insights. You will develop multiple reports to facilitate the generation of various business scenarios and provide features for users to generate scenarios. Furthermore, you will interpret the results of tests and analyses to develop insights into formulated problems within the business/customer context and provide guidance on risks and limitations. Acquiring and using broad knowledge of innovative data analytics methods, algorithms, and tools, including Spark, Elasticsearch, Python, Databricks, Azure, Power BI, Azure Cloud services, LLMs/Gen AI, and the Microsoft Suite, will be crucial for success in this role. This position may involve telecommuting and requires 10% national travel to meet with clients.

The minimum requirements are a Bachelor's degree in Electronics Engineering, Computer Engineering, Data Analytics, Computer Science, or a related field, plus five (5) years of progressive experience in the job offered or a related occupation. Special skill requirements include applying statistical methods to validate results and support strategic decisions; building and interpreting advanced machine learning models; using tools such as Python, scikit-learn, XGBoost, Databricks, Excel, and Azure Machine Learning for data preparation and model validation; integrating diverse data sources using data analytics techniques; and performing data analysis and predictive model development using AI/ML algorithms. Mathematical knowledge of statistics, probability, differentiation and integration, linear algebra, and geometry will be beneficial, as will familiarity with data science libraries such as NumPy, SciPy, and Pandas; Azure Data Factory for data pipeline design; and NLTK, spaCy, Hugging Face Transformers, Azure Text Analytics, OpenAI, Word2Vec, and BERT.
The base salary for this position ranges from $171,000 to $190,000 per annum for 40 hours per week, Monday to Friday. If you have any comments or questions regarding the job opportunity described, or wish to apply, please contact Piyush Khemka, VP, Business Operations, at 111 Town Square Pl., Suite 1203, Jersey City, NJ 07310.
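For illustration, a minimal scikit-learn sketch of the boosted-tree modeling described above, with a held-out validation split. The synthetic data stands in for SigView platform data, which is not shown here.

```python
# Hedged sketch of gradient-boosted classification with validation.
# The synthetic dataset is an illustrative stand-in for real business data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, n_features=30, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=7
)

model = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=7
)
model.fit(X_train, y_train)

# Evaluate on the held-out split before interpreting or reporting results.
print(classification_report(y_test, model.predict(X_test)))
```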
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Engineer specializing in Databricks, your primary responsibility will be to develop, support, and drive end-to-end business intelligence solutions using Databricks. You will collaborate with business analysts and data architects to transform requirements into technical implementations. Your role will involve designing, developing, implementing, and maintaining PySpark code through the Databricks UI to facilitate data and analytics use cases for the client. Additionally, you will code, test, and document new or enhanced data systems to build robust and scalable applications for data analytics. You will also investigate performance, scalability, capacity, and reliability issues to identify and address any arising challenges, and engage in research projects and proofs of concept to enhance data processing capabilities.

Key Requirements
- 3+ years of hands-on experience with Databricks and PySpark.
- Proficiency in SQL and adept data manipulation skills.
- Sound understanding of data warehousing concepts and technologies.
- Familiarity with Google Pub/Sub, Kafka, or MongoDB is a plus.
- Knowledge of ETL processes and tools for data extraction, transformation, and loading would be beneficial.
- Experience with cloud platforms such as Databricks, Snowflake, or Google Cloud.
- Understanding of data governance and data quality best practices.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field.
- Continuous learning demonstrated through technical certifications or related methods.
- 3+ years of relevant experience in data analytics, preferably within the retail domain.

Desired Qualities
- Self-motivated and dedicated to achieving outcomes for a rapidly growing team and organization.
- Effective communication skills through verbal, written, and client presentations.

Location: India
Years of Experience: 3 to 5 years

In this role, your expertise in Databricks and data engineering will play a crucial part in driving impactful business intelligence solutions and contributing to the growth and success of the organization.
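Since the posting lists Kafka alongside PySpark as a plus, here is a minimal sketch of streaming ingestion from Kafka into Delta with Structured Streaming. The broker address, topic, and storage paths are hypothetical.

```python
# Hedged sketch of Kafka-to-Delta streaming ingestion on Databricks.
# Broker, topic, and storage paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "retail-events")
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers raw bytes; cast key and value for downstream parsing.
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string"),
        "timestamp",
    )
)

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/retail-events")
    .start("/mnt/bronze/retail_events"))
```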
Posted 6 days ago
1.0 - 6.0 years
0 - 0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Analyst specializing in Tableau and Databricks, you will be a key player in our data team, responsible for converting complex data into actionable insights through visually engaging dashboards and reports. Your role will involve creating and maintaining advanced Tableau visualizations, integrating Tableau with Databricks for efficient data access, collaborating with various stakeholders to understand business requirements, and ensuring data accuracy and security in visualizations.

Your responsibilities will include designing, developing, and optimizing Tableau dashboards to support business decision-making, integrating Tableau with Databricks to visualize data from various sources, and working closely with data engineers and analysts to translate business needs into effective visualization solutions. You will also be expected to document best practices for visualization design, data governance, and dashboard deployment, and to stay updated with the latest features of Tableau and Databricks.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field, along with at least 5 years of experience building Tableau dashboards and visualizations. Additionally, you should have hands-on experience integrating Tableau with Databricks, a strong understanding of data modeling and ETL processes, proficiency in writing optimized SQL queries, and experience with Tableau Server or Tableau Cloud deployment.

Preferred qualifications include experience with scripting or automation tools, familiarity with other BI tools and cloud platforms, Tableau certification, and knowledge of data privacy and compliance standards. Soft skills such as strong analytical abilities, excellent communication skills, attention to detail, and the ability to work independently and collaboratively in a fast-paced environment will also be beneficial in this role.

If you meet the requirements and are looking for a challenging opportunity to leverage your Tableau and Databricks expertise, we encourage you to share your CV with us at sathish.m@tekgence.com.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Lead Data Engineer specializing in Databricks, you will play a crucial role in designing, developing, and optimizing our next-generation data platform. Your responsibilities will include leading a team of data engineers, offering technical guidance and mentorship, and ensuring the scalability and high performance of data solutions.

You will lead the design, development, and implementation of scalable and reliable data pipelines using Databricks, Spark, and other relevant technologies, and define and enforce data engineering best practices, coding standards, and architectural patterns. Providing technical guidance and mentorship to junior and mid-level data engineers, conducting code reviews, and ensuring the quality, performance, and maintainability of data solutions will also be key aspects of your job.

Your expertise in Databricks will be essential as you architect and implement data solutions on the Databricks platform, including Databricks Lakehouse, Delta Lake, and Unity Catalog. Optimizing Spark workloads for performance and cost efficiency on Databricks, developing and managing Databricks notebooks, jobs, and workflows, and making proficient use of Databricks features such as Delta Live Tables (DLT), Photon, and SQL Analytics will be part of your daily tasks.

In terms of pipeline development and operations, you will develop, test, and deploy robust ETL/ELT pipelines for data ingestion, transformation, and loading from various sources such as relational databases, APIs, and streaming data. Implementing monitoring, alerting, and logging for data pipelines to ensure operational excellence, as well as troubleshooting and resolving complex data-related issues, will also fall under your responsibilities.

Collaboration and communication are crucial in this role, as you will work closely with cross-functional teams, including product managers, data scientists, and software engineers. Clear communication of complex technical concepts to both technical and non-technical stakeholders is vital, as is staying updated with industry trends and emerging technologies in data engineering and Databricks.

Key skills required for this role include extensive hands-on experience with the Databricks platform, including Databricks Workspace, Spark on Databricks, Delta Lake, and Unity Catalog. Strong proficiency in optimizing Spark jobs, an understanding of Spark architecture, experience with Databricks features like Delta Live Tables (DLT), Photon, and Databricks SQL Analytics, and a deep understanding of data warehousing concepts, dimensional modeling, and data lake architectures are essential for success in this position.
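Because the role highlights Delta Live Tables, here is a minimal DLT sketch of a bronze-to-silver flow with data-quality expectations. The source path, table names, and rules are assumptions for the example; the spark session is supplied by the DLT runtime.

```python
# Hedged sketch of a Delta Live Tables pipeline: bronze ingest feeding a
# validated silver table. Paths, names, and rules are illustrative.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw orders landed from cloud storage.")
def orders_bronze():
    # spark is provided by the DLT runtime in a pipeline notebook.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/orders/")
    )


@dlt.table(comment="Orders with basic quality rules enforced.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("ingested_at", F.current_timestamp())
    )
```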
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Faridabad, Haryana
On-site
The project is a dynamic solution empowering companies to optimize promotional activities for maximum impact. It collects and validates data, analyzes promotion effectiveness, plans calendars, and integrates seamlessly with existing systems. The tool enhances vendor collaboration, negotiates better deals, and employs machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize return on investment.

The required technology stack includes Scala, Go, Docker, Kubernetes, and Databricks, with Python as an optional skill. The working time zone for this position is EU, and the specialty sought is Data Science. The ideal candidate should have more than 5 years of experience and Upper-Intermediate English proficiency.

Key soft skills desired for this role include a problem-solving style valued over raw experience, the ability to clarify requirements with the customer, willingness to pair with other engineers when solving complex issues, and good communication skills.

The essential hard skills are experience with Scala and/or Go for designing and building scalable, high-performing applications; containerization and microservices orchestration using Docker and Kubernetes; building data pipelines and ETL solutions using Databricks; data storage and retrieval with PostgreSQL and Elasticsearch; and deploying and maintaining solutions in the Azure cloud environment. Experience with Python is considered a nice-to-have skill.

Responsibilities include developing and maintaining distributed systems using Scala and/or Go, working with Docker and Kubernetes for containerization and microservices orchestration, building data pipelines and ETL solutions using Databricks, working with PostgreSQL and Elasticsearch for data storage and retrieval, and deploying and maintaining solutions in the Azure cloud environment.
Posted 6 days ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
You will be working as a Data Engineer with expertise in Python and PySpark programming. You should have a strong background in cloud services such as Snowflake, Databricks, Informatica, Azure, AWS, and GCP, as well as proficiency in reporting technologies like Power BI, Tableau, Spotfire, Alteryx, and MicroStrategy.

Your responsibilities will include developing and maintaining data pipelines, optimizing data workflows, and ensuring the efficiency and reliability of data integration processes. You are expected to possess strong programming skills in Python and PySpark, along with a deep understanding of SQL. Experience with Snowflake, Databricks, Power BI, MicroStrategy, Tableau, and Spotfire is essential, and familiarity with Informatica and Azure/AWS services would be advantageous.

The interview process will be conducted virtually, and the work model for this position is remote. If you have 7-10 years of experience in this field and are available to start within 15 days, please consider applying for this opportunity by sending your resume to netra.s@twsol.com.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Content and Data Analytics team is part of DataOps within Global Operations at Elsevier, focusing on providing data analysis services primarily using Databricks. The team primarily serves product owners and data scientists of Elsevier's Research Data Platform, contributing to the delivery of leading data analytics products for scientific research, including Scopus and SciVal.

As a Senior Data Analyst at Elsevier, you are expected to have a solid understanding of best practices and the ability to execute projects and initiatives independently. You should be capable of creating advanced-level insights and recommendations, as well as autonomously leading analytics efforts of high complexity.

Your responsibilities will include supporting data scientists within the domains of the Research Data Platform, engaging in various analytical activities such as analyzing large datasets, performing data preparation, and reviewing data science algorithms. You must possess a keen eye for detail, strong analytical skills, expertise in at least one data analysis system, curiosity, dedication to quality work, and an interest in scientific research.

The requirements for this role include a minimum of 5 years of work experience; coding skills in at least one programming language (preferably Python) and SQL; familiarity with string manipulation functions like regular expressions; prior exposure to data analysis tools such as Pandas or Apache Spark/Databricks; knowledge of basic statistics relevant to data science; and experience with visualization tools like Tableau or Power BI.

You will be expected to build and maintain strong relationships with data scientists and product managers, align activities with stakeholders, and present achievements and project updates to various stakeholders. Key competencies for this role include collaborating effectively as part of a team, taking initiative in problem-solving, and driving tasks to successful conclusions.

Elsevier offers various benefits to promote a healthy work-life balance, including well-being initiatives, shared parental leave, study assistance, and sabbaticals. Additionally, the company provides comprehensive health insurance, flexible working arrangements, employee assistance programs, modern family benefits, various paid time off options, and subsidized meals. The company prides itself on being a global leader in information and analytics, supporting science, research, health education, and interactive learning while addressing the world's challenges and fostering a sustainable future.
Posted 6 days ago
12.0 - 16.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are a highly skilled Architect with expertise in Snowflake data modeling and cloud data solutions. With over 12 years of experience in data modeling/data warehousing and 5+ years specifically in Snowflake, you will lead Snowflake optimizations at the warehouse and database levels. Your role involves setting up, configuring, and deploying Snowflake components efficiently for various projects.

You will work with a passionate team of engineers at ValueMomentum's Engineering Center, focused on transforming the P&C insurance value chain through innovative solutions. The team specializes in Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and domain expertise. As part of the team, you will have opportunities for role-specific skill development and contribute to impactful projects.

As an Architect, you will be responsible for optimizing Snowflake at both the warehouse and database levels, setting up and configuring Snowflake components, and implementing cloud management frameworks. Proficiency in Python, PySpark, and SQL, and experience with cloud platforms such as AWS, Azure, and GCP are essential for this role.

Key Responsibilities
- Work on Snowflake optimizations at the warehouse and database levels.
- Set up, configure, and deploy Snowflake components such as databases, warehouses, and roles.
- Set up and monitor data shares and Snowpipes for Snowflake projects.
- Implement Snowflake cloud management frameworks for monitoring, alerting, governance, budgets, change management, and cost optimization.
- Develop cloud usage reporting for cost-related insights, metrics, and KPIs.
- Build and enhance Snowflake forecasting processes and explore cloud spend trends.

Requirements
- 12+ years of experience in data modeling/data warehousing.
- 5+ years of experience in Snowflake data modeling and architecture, including expertise in cloning, data sharing, and search optimization.
- Proficiency in Python, PySpark, and complex SQL for analysis.
- Experience with cloud platforms like AWS, Azure, and GCP.
- Knowledge of Snowflake performance management and cloud-based database role management.

ValueMomentum is a leading solutions provider for the global property and casualty insurance industry. It focuses on helping insurers achieve sustained growth, high performance, and stakeholder value. The company has served over 100 insurers and is dedicated to fostering resilient societies.

Benefits at ValueMomentum include a competitive compensation package, career advancement opportunities through coaching and mentoring programs, comprehensive training and certification programs, and performance management with goal setting, continuous feedback, and rewards for exceptional performers.
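To make the cloud-spend reporting responsibility concrete, here is a minimal sketch using the Snowflake Python connector to pull per-warehouse credit usage from the standard ACCOUNT_USAGE views. Connection parameters are placeholders.

```python
# Hedged sketch of per-warehouse credit reporting in Snowflake.
# Connection parameters are placeholders, not real credentials.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="reporting_user",
    password="***",
    warehouse="REPORTING_WH",
)

# WAREHOUSE_METERING_HISTORY is Snowflake's standard account-usage view
# of credits consumed per warehouse per hour.
query = """
    SELECT warehouse_name,
           DATE_TRUNC('month', start_time) AS usage_month,
           SUM(credits_used)               AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('month', -3, CURRENT_TIMESTAMP())
    GROUP BY 1, 2
    ORDER BY usage_month, credits DESC
"""

cur = conn.cursor()
try:
    for warehouse, month, credits in cur.execute(query):
        print(f"{month:%Y-%m}  {warehouse:<20} {credits:10.2f}")
finally:
    cur.close()
    conn.close()
```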
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced Databricks Engineer joining the data engineering team in Pune. Your expertise lies in the Databricks platform, Python, PySpark, and SQL. Your main responsibilities include designing and implementing data pipelines, building scalable data solutions, and enabling CI/CD across data platforms. You will apply modern data engineering practices, collaborating with data architects, analysts, and stakeholders to deliver efficient data solutions. Your tasks include developing reliable data pipelines, implementing CI/CD workflows, optimizing performance, and maintaining unit test cases for data accuracy. Your qualifications include 5-8 years of experience in data engineering; proficiency in the Databricks platform, Python, PySpark, SQL, CI/CD tools, and MongoDB; and strong problem-solving skills. The ability to work independently in a fast-paced environment is essential for this role.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a strategic thinker passionate about driving solutions in client profitability analytics. You have found the right team. As an Analytics Solutions Associate within the Wholesale Client Profitability (WCP) Analytics Solutions team, you will spend each day defining, refining, and delivering set goals for our firm. You will manage a range of projects focused on coordinating client profitability data and reporting improvements, working collaboratively with business and technology stakeholders to deliver proactive solutions, prioritize requests, and enhance client profitability data and reporting. Your role will be within the Commercial and Investment Banking business of Global Finance and Business Management, reporting to the Client Profitability Reporting Lead in India.

Job responsibilities:
- Work collaboratively with business and technology stakeholders to deliver proactive solutions, prioritize requests, and improve client profitability data and reporting
- Conduct analysis on key issues impacting client profitability to determine root cause and deliver quality documentation capturing proposed solutions
- Build and maintain key relationships across business stakeholders (e.g., project managers, business users, subject matter experts) and operations and technology partners
- Support WCP production-related activities with project impact analyses, thorough unit and end-to-end testing, and subject matter expertise
- Prepare and distribute data-driven communications, profitability presentations, and business analysis with accuracy and adherence to JPMC brand style guidelines

Required qualifications, capabilities, and skills:
- Bachelor's degree in Finance, Accounting, Management Information Solutions, Data Science, or a similar discipline
- 5+ years of experience in financial services, business analytics, project management, or equivalent
- Proficiency with Excel, Access, Cognos, Python, SQL, SharePoint, Confluence, and JIRA (including JIRA queries and JQL)
- Superior written and verbal business communication, with the ability to communicate effectively with all levels of management and staff globally
- Experience with data visualization and analysis tools such as Tableau, Qlik Sense, and Databricks

Preferred qualifications, capabilities, and skills:
- Knowledge of the business and related functions
- Critical thinking, attention to detail, and analytical skills; able to synthesize large amounts of data and formulate appropriate conclusions
Posted 6 days ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
About the Company
Sonny's Enterprises is the world's largest manufacturer of conveyorized car wash equipment, parts, and supplies. The company is recognized and awarded by the International Car Wash Association for innovating new technologies to advance the industry. Sonny's Enterprises takes pride in designing and building products in the USA. The company's culture is centered around embracing change and finding new and better ways to accelerate progress. Sonny's Enterprises values its people as its most valuable resource and invites individuals to explore opportunities for growth and career development within the organization.

Position Summary
As the Business Applications Manager (GCC India) at Sonny's Enterprises, you will lead the India-based global capability center (GCC) team that supports enterprise applications, digital platforms, and business technology services. In this role, you will oversee both functional and technical delivery across systems such as SAP, Salesforce, ecommerce, data analytics, and internal collaboration tools like SharePoint. Your primary responsibilities will include ensuring system stability, driving enhancement initiatives, supporting global projects, and leading a diverse cross-functional team of analysts, developers, and technical staff.

Key Responsibilities
Manage the day-to-day operations and delivery for the India-based team supporting:
- SAP Center of Excellence (functional and technical)
- Salesforce (Sales Cloud, Service Cloud, Field Service)
- Ecommerce integration and application support
- Cybersecurity analyst (supporting enterprise InfoSec)
- SharePoint/intranet developer
- Data engineers and data analysts
- Linux administrators and cloud platform support

Additional Responsibilities
In addition to the key responsibilities mentioned above, you will also:
- Provide leadership, mentorship, and performance management for the team
- Collaborate with U.S.-based IT leaders and business stakeholders to align priorities and deliver projects
- Oversee support and minor enhancements for business-critical applications across finance, sales, service, supply chain, and HR
- Ensure adherence to change control, system documentation, and support best practices
- Monitor team KPIs and ensure high system availability and responsiveness to incidents
- Support hiring, training, and capability building for the India-based applications team
- Align with global enterprise architecture, cybersecurity, and data standards

Qualifications
- Bachelor's degree in Engineering, Information Technology, or a related field
- 10+ years of experience in enterprise applications or IT delivery roles
- 5+ years managing cross-functional technology teams

Required Skills
- Proven track record of managing SAP ECC/S/4HANA environments
- Experience managing Salesforce support teams and application owners
- Exposure to ecommerce systems, cloud environments (preferably Azure), and data analytics platforms
- Experience coordinating with U.S.-based/global teams and supporting distributed business operations

Preferred Skills
- Prior experience in a Global Capability Center (GCC) or shared services center
- Experience with Power BI, Databricks, Azure Data Factory, or similar tools
- Understanding of cybersecurity controls and frameworks
- Prior experience supporting manufacturing, distribution, or industrial sectors

Equal Opportunity Statement
Posted 6 days ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on data architecture and engineering tasks to support business operations and decision-making.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Develop and maintain data pipelines for efficient data processing.
- Implement ETL processes to ensure seamless data migration and deployment.
- Collaborate with cross-functional teams to design and optimize data solutions.
- Conduct data quality assessments and implement improvements for data integrity.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data architecture principles.
- Experience in designing and implementing data solutions.
- Proficient in SQL and other data querying languages.
- Knowledge of cloud platforms such as AWS or Azure.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
Posted 6 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-Have Skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, PySpark
Good-to-Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will engage in problem-solving discussions, contribute to the overall project strategy, and adapt to evolving requirements while maintaining a focus on delivering high-quality applications that align with business objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and methodologies.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, and PySpark.
- Strong understanding of data integration techniques and ETL processes.
- Experience with cloud-based data storage solutions and data management.
- Familiarity with programming languages such as Python or Scala.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.
Posted 6 days ago
10.0 years
0 Lacs
Greater Lucknow Area
On-site
Kyndryl – Software Engineering, Data Science
Locations: Bengaluru, Karnataka, India; Hyderabad, Telangana, India; Chennai, Tamil Nadu, India; Gurugram, Haryana, India; Pune, Maharashtra, India; Noida, Uttar Pradesh, India
Posted on Jul 18, 2025

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to embark on an exhilarating journey as a Data & AI Consultant? Join Kyndryl and become a driving force behind the transformative power of data! We're seeking an exceptionally talented individual to accelerate the competitive performance of our customers worldwide, establishing us as their unrivaled business and technology consulting partner.

As a Data & AI Consultant, your impact will be monumental. You will unleash the potential of data technology, collaborating with customers to envision and conceptualize strategic applications. By combining your deep understanding of business and technology, you will become a catalyst for success, delivering invaluable insights and recommendations that propel our customers' organizations forward.

Prepare to immerse yourself in the world of data & AI strategies and programs, crafting ingenious approaches to collecting, storing, analyzing, and visualizing data from diverse sources. Your technical expertise in a vast array of cutting-edge big data tools will empower you to develop groundbreaking solutions tailored to meet the unique requirements of each customer.

But that's not all – your role as a Data & AI Consultant at Kyndryl will transcend traditional boundaries. You will lead captivating workshops and engaging consulting engagements, helping customers forge data-driven strategies that reshape their future. Educating our customers about the latest Data & AI technologies and frameworks will be second nature to you, enabling them to unlock the full potential of their data resources.

Your Future at Kyndryl
As a Data Consultant at Kyndryl you will join the Kyndryl Consultant Profession, working with other Kyndryl Consultants, Architects, Project Managers, and cross-functional Technical Subject Matter Experts – presenting unlimited opportunities with unmatched support through our investment in your learning, training, and career growth.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience
- Total of around 10+ years of experience delivering Data solutions and 5+ years delivering AI solutions/applications for clients in the industry segments listed below
- Total of 2+ years of experience with Generative AI use cases and technologies
- This position has a higher focus on Microsoft Azure Data & AI solutions.
At least 5 years of experience with Microsoft Data & AI solutions
Total of around 12+ years of experience within at least 2 industry segments from below:
Banking
Financial Services
Insurance – Life; Non-Life; Re-insurance only
BFSI multi-industry expertise
Neo Finance – Fintech; Regtech; Banking 2.0; InsurTech, etc. (4-5 yrs exp)
Healthcare – Payer; Provider; Manufacturer
Retail – Traditional & Modern; Offline/Online
CPG – Traditional & Modern
Manufacturing – Capital, Auto, Process, Contract
Consumer/FMCG – Business Side
Airlines – Business & Operational
Hospitality – Business & Operational
Education & Learning – Digital Models
Deep functional & process knowledge within at least 4 of the following functional domains, within at least 2 of the above listed industry segments:
Customer Management & Growth
Sales & Distribution
Marketing Management
Supply Chain, Inventory & Procurement
Human Resources
Core & Branch Operations
Manufacturing Operations
After Market Operations
Digital Operations
Broad level knowledge in the following Data Management areas is desirable:
Deep understanding of industry/functional data structures and data models to design, develop, and deliver various data-based use cases
Designing and building database solutions (industry-oriented models, schemas/views/tables/stored procedures/forms/queries, etc.)
Neo data solutions – broad understanding and knowledge of designing and building industry-focused data solutions using newer technologies such as Snowflake, Databricks, Cloudera, and data ISVs like Privacera, Alation, etc.
Deep understanding and knowledge of Big Data ecosystem implementation in various industries – common challenges, solutions, patterns, technologies & operational models
Good knowledge of various Data & Analytics technologies such as DWHs, data lake solutions, ETL solutions, governance & information management, and compliance & privacy solutions
Very strong PoV and knowledge of various data governance, security, audit, compliance & privacy regulations, policies & standards for at least 2 of the above listed industries
Understanding of various industry domain and functional use cases driven by data, with a strong PoV on the data foundation layer/ecosystem needed to power them
Cloud-native data ecosystem – deep understanding and knowledge of designing and building industry/functional data solutions using cloud PaaS and cloud-native technologies
Broad understanding and experience of delivery in at least 3 of the below mentioned Data Management dimensions in an industry context:
Enterprise Data Strategy, vision & mission
Data Discovery & Search
Extraction, Transform, Load
Data Migration (legacy to modern; on-premises to cloud)
Persistence & Storage
Metadata Management
Master Data Management
Data Quality Engineering
Data Access Control & Security Model
Privacy, ILM & Lineage
Data Backup & Resiliency
Data Operations
Broad understanding and experience of delivery in at least 3 of the below mentioned AI dimensions in an industry context:
AI use case discovery
GenAI use case discovery
AI algorithms and methods
Technology stack for AI & MLOps
Productionization of AI models
GenAI technology stack and LLMOps
Thought leader in the use of AI algorithms to solve business problems
Preferred Technical And Professional Experience
Self-starter who can think outside the box and come up with solutions to resolve and mitigate complex problems
Experience in developing maturity models, TCO & ROI models, AS-IS : TO-BE blueprints, technology analysis, roadmap design, and implementation plans in the context of a specific industry/functional segment
Ability to communicate with a variety of different audiences and strong presentation skills
Ability to lead and motivate technical communities
Ability to effectively recognize and adapt to change
Ability to quickly grasp customers' business challenges and drive positive change
Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Senior Executive - Programmer Analyst
Location: Chennai, India
Band: A2
Designation: Lead Programmer Analyst
Overview
We are seeking a highly skilled and experienced Lead Programmer Analyst specializing in Microsoft Technologies to join our dynamic team. As a Lead Programmer Analyst, you will play a critical role in shaping the success of our technology projects. Your responsibilities will include architecture design, implementation, and overseeing the development and deployment of Microsoft-based solutions that meet our internal needs. You'll collaborate closely with architects, business analysts, project managers, developers, testers and other stakeholders to ensure the successful delivery of projects within scope, budget, and schedule.
Technical Skills
Microsoft .NET Stack: Proficiency in .NET 8.0, C#, ASP.NET Core, and MVC. Experience with building Web APIs and Minimal APIs. Familiarity with front-end technologies such as React, TypeScript, and NodeJS.
Data Persistence and Messaging: Hands-on experience with ORMs (Object-Relational Mappers). Knowledge of messaging and streaming technologies.
NoSQL Databases: Understanding of NoSQL databases and their use cases.
Microsoft Azure: Designing and implementing cloud-based solutions using Azure services: Azure App Services, Azure Functions, Azure Web Jobs, Azure SQL Database, Azure Storage.
Additional Skills and Value Additions
Experience working in Agile/Scrum environments. Familiarity with Agile methodologies and Scrum practices.
Python: General Python skills. Data handling using Python. API development using FastAPI or Flask (see the sketch after this listing). Knowledge of PySpark.
Big Data: Exposure to technologies such as Databricks and Snowflake. Familiarity with Spark.
Good to Have
Relevant Microsoft certifications are a plus. Experience with healthcare data analytics, machine learning, or AI technologies. Certification in healthcare IT (e.g., Certified Professional in Healthcare Information and Management Systems, CHPS).
Soft Skills
Strong communication skills, both oral and written. Ability to work with stakeholders across various geographies and to build and sustain teams. Mentor people and create a high-performing organization, fostering talent and resolving conflicts to build and sustain teams.
Education
Master's or Bachelor's degree in Engineering from a top-tier college, with good grades.
Business Domain
US Healthcare Insurance & Payer Analytics
Insurance Fraud, Waste & Abuse
Recovery Audit & Utilization Review
Compliance Adherence & Coding Accuracy
Payer Management & Code Classification Management
Requirements & Responsibilities
Architectural Design and Implementation: Design scalable, reliable, and high-performance solutions based on Microsoft technologies, including but not limited to .NET, Azure, SQL Server, and SharePoint Online. Provide expertise in creating robust architectures that align with business objectives.
Requirements Gathering and Analysis: Collaborate with stakeholders to understand business objectives and technical requirements. Translate requirements into architectural blueprints and design specifications.
Mentoring and Knowledge Transfer: Mentor new engineers, helping them adapt to the software development environment. Share best practices and guide their learning journey.
Alignment with Business Goals: Work closely with project managers, business analysts, and quality assurance teams to ensure that technical solutions align with business requirements.
Code Quality and Security: Conduct thorough code reviews and enforce coding standards, best practices, and security guidelines.
Continuous Learning and Adaptation: Stay informed about emerging technologies, trends, and best practices, and evaluate their applicability to ongoing projects and solutions.
Troubleshooting and Issue Resolution: Assist in resolving complex technical issues during development or deployment.
Cloud Migration: Lead the migration of on-premises applications to the cloud, specifically leveraging the Microsoft Azure platform.
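Since the listing asks for API development with FastAPI or Flask, here is a minimal FastAPI sketch. The /claims/{claim_id} resource, its fields, and the in-memory store are hypothetical stand-ins, not part of the posting.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class Claim(BaseModel):
        claim_id: str
        status: str

    # Stand-in for a real database, just to keep the example self-contained.
    FAKE_DB = {"C-100": Claim(claim_id="C-100", status="approved")}

    @app.get("/claims/{claim_id}", response_model=Claim)
    def get_claim(claim_id: str) -> Claim:
        claim = FAKE_DB.get(claim_id)
        if claim is None:
            raise HTTPException(status_code=404, detail="claim not found")
        return claim

Run locally with uvicorn (assuming it is installed): uvicorn main:app --reload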
Posted 6 days ago
50.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Your Team Responsibilities
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.
Your Key Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats (a sketch of this normalize-check-identify flow appears after this listing). We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.
Your Skills And Experience That Will Help You Excel
Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
Exposure to SQL databases such as Oracle, MySQL, and Microsoft SQL Server is a must.
Any experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
Exposure to NoSQL databases such as Neo4j or document databases is also good to have.
About MSCI
What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.
MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.
To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.
Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com
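A hedged PySpark sketch of the normalize, quality-check, and assign-identifier flow the listing describes. The feed path, the ISIN column, the 12-character rule, and the hash-derived internal_id are illustrative assumptions, not MSCI's actual pipeline.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("vendor_feed_qc").getOrCreate()

    # Hypothetical vendor feed with an "isin" key column.
    feed = spark.read.option("header", True).csv("/data/vendor/prices.csv")

    normalized = feed.withColumn("isin", F.upper(F.trim("isin")))

    # Quality gate: ISINs are 12 characters; quarantine rows that fail.
    good = normalized.filter(F.length("isin") == 12)
    bad = normalized.subtract(good)  # kept aside for review, not silently dropped

    # Deterministic internal identifier derived from the normalized key.
    released = good.withColumn("internal_id", F.sha2(F.col("isin"), 256))

    released.write.mode("overwrite").parquet("/data/released/prices/")

Quarantining the rejects rather than discarding them keeps the quality gate auditable for downstream consumers.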
Posted 6 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Are you a skilled data professional with a passion for transforming raw data into actionable insights, and a demonstrated history of learning and implementing new technologies? The CCB Finance Data & Insights Team is an agile product team responsible for the development, production, and transformation of financial data and reporting across CCB. Our vision is to improve the lives of our people and increase value to the firm by leveraging the power of our data and the best tools to analyze data, generate insights, save time, improve processes & control, and lead the organization in developing skills of the future.
Job Summary
As a Data Visualization Associate within the Consumer and Community Banking (CCB) Finance Data & Insights Team, you will be integral to an agile product team tasked with developing, producing, and transforming financial reporting for the Consumer and Community Banking division. You will leverage your ability and passion for interpreting complex data to create impactful data visualizations and intelligence solutions that support the organization's top leaders in achieving strategic goals. Your role will involve identifying and evaluating opportunities to streamline processes by eliminating manual tasks and implementing automated solutions using tools like Alteryx or ThoughtSpot. Additionally, you will be responsible for extracting, analyzing, and summarizing data to fulfill ad hoc stakeholder requests, while contributing significantly to the modernization of our data environment through the transition to a cloud platform.
Job Responsibilities
Transform raw data into actionable insights, demonstrating a history of learning and implementing new technologies.
Lead the Finance Data & Insights Team, an agile product team, taking responsibility for the development, production, and transformation of financial data and reporting across CCB.
Improve the lives of our people and increase value to the firm by leveraging the power of data and the best tools to analyze data, generate insights, save time, improve processes & control, and lead the organization in developing skills of the future.
Join an agile product team as a Data Visualization Associate on the CCB Finance Data & Insights Team, responsible for the development and production of reporting across CCB.
Lead conversations with business teams and create data visualizations and intelligence solutions utilized by the organization's top leaders to reach key strategic imperatives.
Identify and assess opportunities to eliminate manual processes and utilize automation tools such as Alteryx or ThoughtSpot to bring automated solutions to life.
Extract, analyze, and summarize data for ad hoc stakeholder requests, playing a role in transforming the data environment to a modernized cloud platform.
Required Qualifications, Capabilities And Skills
A minimum of 6 years of overall experience, with 3+ years of experience in Tableau and SQL
Minimum 6 years of experience developing data visualizations and presentations
Experience with data wrangling tools such as Alteryx
Experience with relational databases, utilizing SQL to pull and summarize large datasets for report creation and ad-hoc analyses (see the sketch after this listing)
Experience in reporting development and testing, and ability to interpret unstructured data and draw objective inferences given known limitations of the data
Demonstrated ability to think beyond raw data, understand the underlying business context, and sense business opportunities hidden in data
Strong written and oral communication skills; ability to communicate effectively with all levels of management and partners from a variety of business functions
Preferred Qualifications
AWS, Databricks, Snowflake, or other cloud data warehouse experience
Experience with ThoughtSpot or similar tools empowering stakeholders to better understand their data
Highly motivated, self-directed, and curious to learn new technologies
About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
About The Team
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We're proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction.
The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.
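A self-contained sketch of pulling and summarizing data with SQL, of the kind the qualifications describe. It uses Python's built-in sqlite3 so it runs anywhere; the txns table and its columns are invented for illustration, where a real workflow would target a warehouse such as Snowflake or Databricks.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE txns (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO txns VALUES (?, ?)",
        [("NE", 120.0), ("NE", 80.0), ("SW", 200.0)],
    )

    # The kind of aggregation that backs a dashboard view or an ad hoc request.
    query = """
        SELECT region, SUM(amount) AS total, COUNT(*) AS n
        FROM txns GROUP BY region ORDER BY region
    """
    for region, total, n in conn.execute(query):
        print(f"{region}: total={total:.2f} over {n} txns")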
Posted 6 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description:
Desired Skills & Qualifications:
Must have 4+ years of work experience.
Strong understanding of Azure security, administration, architecture, ADO Pipeline CI/CD, Terraform, GitHub Actions, PowerShell, Databricks, Snowflake, and Event Hubs (a Python Event Hubs sketch follows this listing).
Expertise in KSH/Bash and Python.
Proficient in using Visual Studio Code, GitHub, and JFrog.
Proficient in RHEL KVM, OpenShift, Kubernetes, Redis, and Kafka.
Strong problem-solving skills and the ability to troubleshoot complex issues.
Excellent communication and collaboration skills to work effectively within a team.
Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.
Hands-on experience with Python, Java, GitHub, and Kubernetes is preferred.
Weekly Hours: 40
Time Type: Regular
Location: IND:KA:Bengaluru / Innovator Building, Itpb, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Job Category: Big Data
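Because the listing pairs Python with Event Hubs, here is a hedged sketch of publishing events with the azure-eventhub SDK (v5 API). The connection string and hub name are placeholders, and installing azure-eventhub is assumed.

    from azure.eventhub import EventHubProducerClient, EventData

    producer = EventHubProducerClient.from_connection_string(
        conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;...",  # placeholder
        eventhub_name="telemetry",  # hypothetical hub name
    )

    # Batching amortizes network round-trips; send_batch transmits the whole batch in one call.
    with producer:
        batch = producer.create_batch()
        batch.add(EventData('{"host": "web-01", "cpu": 0.42}'))  # sample payload
        producer.send_batch(batch)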