
1401 Databricks Jobs - Page 30

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 10.0 years

0 - 1 Lacs

Kolkata, Hyderabad, Pune

Work from Office

• Experience with MongoDB is essential; working knowledge of other databases such as SQL Server and PostgreSQL is also required. Must be able to understand and modify database objects based on business requests, and have a deep understanding of writing complex queries.
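For illustration only (not part of the posting): the kind of "complex query" this role describes, a join plus an aggregate filter, can be sketched with Python's built-in sqlite3 module against a hypothetical two-table schema.

```python
import sqlite3

# Hypothetical schema for illustration: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                     amount REAL, FOREIGN KEY (customer_id) REFERENCES customers(id));
INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 40.0);
""")

# A query in the spirit of the posting: join, aggregate, then filter on the aggregate.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
    HAVING total > 50
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Asha', 2, 200.0)]
```

A MongoDB equivalent would express the same logic as an aggregation pipeline ($lookup, $group, $match) rather than SQL.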

Posted 1 month ago

Apply

6.0 - 10.0 years

14 - 24 Lacs

Hyderabad

Work from Office

Role & responsibilities
Job Title: Data Engineer
Years of experience: 6 to 10 years (minimum 5 years of relevant experience)
Work Mode: Work From Office (Hyderabad)
Notice Period: Immediate to 30 days only
Key Skills: Python, SQL, AWS, Spark, Databricks (mandatory); Airflow (good to have)

Posted 1 month ago

Apply

5.0 - 9.0 years

19 - 23 Lacs

Mumbai

Work from Office

Overview: MSCI has an immediate opening in one of our fastest-growing product lines. As a Lead Architect within Sustainability and Climate, you are an integral part of a team that develops high-quality architecture solutions for various software applications on modern cloud-based technologies. As a core technical contributor, you are responsible for delivering critical architecture solutions across multiple technical areas to support project goals. The systems under your responsibility will be among the most mission-critical systems at MSCI. They require strong technology expertise, a strong sense of enterprise system design, state-of-the-art scalability and reliability, and innovation. Your success in our dynamic and rapidly growing environment will be measured by your ability to make technology decisions within a consistent framework to support the growth of our company and products, to lead software implementations in close partnership with global leaders and multiple product organizations, and to drive technology innovation. At MSCI, you will operate in a culture where we value merit and track record. You will own the full life cycle of the technology services and provide management, technical, and people leadership in the design, development, quality assurance, and maintenance of our production systems, making sure we continue to scale our great franchise.

Responsibilities
• Engage technical teams and business stakeholders to discuss and propose technical approaches to meet current and future needs
• Define the technical target state of the product and drive achievement of the strategy
• As the Lead Architect, lead the design, development, and maintenance of our data architecture, ensuring scalability, efficiency, and reliability
• Create and maintain comprehensive documentation for the architecture, processes, and best practices, including Architecture Decision Records (ADRs)
• Evaluate recommendations and provide feedback on new technologies
• Develop secure, high-quality production code; review and debug code written by others
• Identify opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
• Collaborate with a cross-functional team to draft, implement, and adapt the overall architecture of our products and support infrastructure, in conjunction with software development managers and product management teams
• Stay abreast of new technologies and issues in the software-as-a-service industry, including current technologies, platforms, standards, and methodologies
• Be actively engaged in setting technology standards that impact the company and its offerings
• Ensure the sharing of engineering best practices across departments; develop and monitor technical standards to ensure adherence to them

Qualifications
• Prior senior software architecture roles
• Proficiency in programming languages such as Python/Java/Scala and knowledge of SQL and NoSQL databases
• Ability to drive the development of conceptual, logical, and physical data models aligned with business requirements
• Ability to lead the implementation and optimization of data technologies, including Apache Spark
• Experience with a table format such as Delta or Iceberg
• Strong hands-on experience in data architecture, database design, and data modeling
• Proven experience as a Data Platform Architect or in a similar role, with expertise in Airflow, Databricks, Snowflake, Collibra, and Dremio
• Experience with cloud platforms such as AWS, Azure, or Google Cloud
• Hands-on technologist with strong core computer science fundamentals and the ability to dive into details
• Strong preference for financial services experience
• Proven leadership of large-scale distributed software teams that have delivered great products on deadline
• Experience in a modern iterative software development methodology
• Experience with globally distributed teams and business partners
• Experience in building and maintaining applications that are mission critical for customers
• M.S. in Computer Science, Management Information Systems, or a related engineering field
• 15+ years of software engineering experience
• Demonstrated consensus builder and collegial peer

What we offer you
• Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
• Flexible working arrangements, advanced technology, and collaborative workspaces.
• A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
• A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
• A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
• Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
• An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers.
This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. 
Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 18 Lacs

Coimbatore

Hybrid

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: BE/BTech/MTech
Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Microsoft Azure Analytics Services and collaborating with cross-functional teams to deliver high-quality solutions.
Roles & Responsibilities:
- Lead the design, development, and deployment of applications using Microsoft Azure Analytics Services.
- Collaborate with cross-functional teams to ensure the timely delivery of high-quality solutions.
- Act as the primary point of contact for all application-related issues, providing technical guidance and support to team members.
- Ensure adherence to best practices and standards for application development, testing, and deployment.
- Identify and mitigate risks and issues related to application development and deployment.
Professional & Technical Skills:
- Must-have skills: Strong experience in Microsoft Azure Analytics Services.
- Good-to-have skills: Experience in other Microsoft Azure services such as Azure Functions, Azure Logic Apps, and Azure Event Grid.
- Experience in designing, developing, and deploying applications using Microsoft Azure Analytics Services.
- Strong understanding of cloud computing concepts and principles.
- Experience in working with Agile methodologies.
- Excellent problem-solving and analytical skills.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality solutions.

Posted 1 month ago

Apply

10.0 - 20.0 years

25 - 40 Lacs

Hyderabad, Pune

Hybrid

Data Modeler / Lead - Healthcare Data Systems

Position Overview
We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities
Data Architecture & Modeling
• Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management
• Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment)
• Create and maintain data lineage documentation and data dictionaries for healthcare datasets
• Establish data modeling standards and best practices across the organization
Technical Leadership
• Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica
• Architect scalable data solutions that handle large volumes of healthcare transactional data
• Collaborate with data engineers to optimize data pipelines and ensure data quality
Healthcare Domain Expertise
• Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI)
• Design data models that support analytical, reporting, and AI/ML needs
• Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations
• Partner with business stakeholders to translate healthcare business requirements into technical data solutions
Data Governance & Quality
• Implement data governance frameworks specific to healthcare data privacy and security requirements
• Establish data quality monitoring and validation processes for critical health plan metrics
• Lead efforts to standardize healthcare data definitions across multiple systems and data sources

Required Qualifications
Technical Skills
• 10+ years of experience in data modeling with at least 4 years focused on healthcare/health plan data
• Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches
• Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing
• Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks)
• Proficiency with data modeling tools (Hackolade, ERwin, or similar)
Healthcare Industry Knowledge
• Deep understanding of health plan data structures including claims, eligibility, provider data, and pharmacy data
• Experience with healthcare data standards and medical coding systems
• Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment)
• Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI)
Leadership & Communication
• Proven track record of leading data modeling projects in complex healthcare environments
• Strong analytical and problem-solving skills with the ability to work with ambiguous requirements
• Excellent communication skills with the ability to explain technical concepts to business stakeholders
• Experience mentoring team members and establishing technical standards
Preferred Qualifications
• Experience with Medicare Advantage, Medicaid, or Commercial health plan operations
• Cloud platform certifications (AWS, Azure, or GCP)
• Experience with real-time data streaming and modern data lake architectures
• Knowledge of machine learning applications in healthcare analytics
• Previous experience in a lead or architect role within healthcare organizations
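As a sketch of the dimensional modeling this posting centers on (table and column names below are invented, not taken from the posting or any real health-plan model), here is a minimal star schema and the kind of analytical rollup it exists to serve, using Python's stdlib sqlite3:

```python
import sqlite3

# Minimal hypothetical star schema: one claims fact table keyed to
# member and provider dimension tables via surrogate keys (_sk).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_member   (member_sk INTEGER PRIMARY KEY, plan TEXT);
CREATE TABLE dim_provider (provider_sk INTEGER PRIMARY KEY, specialty TEXT);
CREATE TABLE fact_claim   (claim_id INTEGER PRIMARY KEY,
                           member_sk INTEGER, provider_sk INTEGER,
                           paid_amount REAL);
INSERT INTO dim_member VALUES (1, 'Medicare Advantage'), (2, 'Commercial');
INSERT INTO dim_provider VALUES (10, 'Cardiology'), (11, 'Primary Care');
INSERT INTO fact_claim VALUES (100, 1, 10, 250.0), (101, 1, 11, 75.0), (102, 2, 11, 60.0);
""")

# Typical analytical rollup the model should make cheap: paid amount by plan.
by_plan = db.execute("""
    SELECT m.plan, SUM(f.paid_amount) AS paid
    FROM fact_claim f JOIN dim_member m ON m.member_sk = f.member_sk
    GROUP BY m.plan ORDER BY m.plan
""").fetchall()
print(by_plan)  # [('Commercial', 60.0), ('Medicare Advantage', 325.0)]
```

The design choice a dimensional model encodes is exactly this: facts are narrow and additive, and every reporting cut (plan, specialty, period) is a join to a small dimension table.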

Posted 1 month ago

Apply

4.0 - 9.0 years

1 - 2 Lacs

Kolkata, Pune, Chennai

Hybrid

Role & responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with the client architect and team members
- Orchestrate the data pipelines via a scheduler such as Airflow
Preferred candidate profile:
- Bachelor's and/or master's degree in computer science, or equivalent experience
- Must have 3+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and the Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Experience with the AWS/Azure stack
- Desirable: ETL with batch and streaming (Kinesis)
- Experience in building ETL / data warehouse transformation processes
- Experience with Apache Kafka for streaming / event-based data
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail
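To make the "ETL transformation process" requirement above concrete, here is a plain-Python sketch of the filter / derive / aggregate shape such pipeline steps take (the data and column names are invented; a PySpark version would chain DataFrame filter, withColumn, and groupBy instead):

```python
from collections import defaultdict

# Invented sample records standing in for a batch extract.
raw = [
    {"city": "Pune", "amount": 100, "status": "ok"},
    {"city": "Pune", "amount": 50, "status": "failed"},
    {"city": "Chennai", "amount": 70, "status": "ok"},
]

# Filter: drop records that failed upstream validation.
valid = [r for r in raw if r["status"] == "ok"]

# Derive: add a computed column (18% tax is an arbitrary example rate).
for r in valid:
    r["amount_with_tax"] = round(r["amount"] * 1.18, 2)

# Aggregate: total the derived column per grouping key.
totals = defaultdict(float)
for r in valid:
    totals[r["city"]] += r["amount_with_tax"]
print(dict(totals))  # {'Pune': 118.0, 'Chennai': 82.6}
```

The same three-stage shape recurs whether the engine is plain Python, Spark, or SQL; what Databricks adds is distribution and scheduling around it.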

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Pune, Gurugram

Work from Office

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact at ZS. At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems: the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. Business Technology: ZS's Technology group focuses on scalable strategies, assets, and accelerators that deliver enterprise-wide transformation to our clients via cutting-edge technology. We leverage digital and technology solutions to optimize business processes, enhance decision-making, and drive innovation. Our services include, but are not limited to, Digital and Technology advisory, Product and Platform development, and Data, Analytics and AI implementation.
What you'll do:
- Take complete ownership of activities and assigned responsibilities across all phases of the project lifecycle to solve business problems across one or more client engagements;
- Apply appropriate development methodologies (e.g. agile, waterfall) and best practices (e.g. mid-development client reviews, embedded QA procedures, unit testing) to ensure successful and timely completion of assignments;
- Collaborate with other team members to leverage expertise and ensure seamless transitions;
- Exhibit flexibility in undertaking new and challenging problems and demonstrate excellent task management;
- Assist in creating project outputs such as business case development, solution vision and design, user requirements, prototypes, technical architecture (if needed), test cases, and operations management;
- Bring transparency in driving assigned tasks to completion and report accurate status;
- Bring a consulting mindset to problem solving and innovation by leveraging technical and business knowledge/expertise, and collaborate across other teams;
- Assist senior team members and delivery leads in project management responsibilities.
What you'll bring:
- Big Data Technologies: Proficiency in working with big data technologies, particularly in the context of Azure Databricks, which may include Apache Spark for distributed data processing.
- Azure Databricks: In-depth knowledge of Azure Databricks for data engineering tasks, including data transformations, ETL processes, and job scheduling.
- SQL and Query Optimization: Strong SQL skills for data manipulation and retrieval, along with the ability to optimize queries for performance in Snowflake.
- ETL (Extract, Transform, Load): Expertise in designing and implementing ETL processes to move and transform data between systems, utilizing tools and frameworks available in Azure Databricks.
- Data Integration: Experience with integrating diverse data sources into a cohesive and usable format, ensuring data quality and integrity.
- Python/PySpark: Knowledge of programming languages like Python and PySpark for scripting and extending the functionality of Azure Databricks notebooks.
- Version Control: Familiarity with version control systems, such as Git, for managing code and configurations in a collaborative environment.
- Monitoring and Optimization: Ability to monitor data pipelines, identify bottlenecks, and optimize performance, including for Azure Data Factory.
- Security and Compliance: Understanding of security best practices and compliance considerations when working with sensitive data in Azure and Snowflake environments.
- Snowflake Data Warehouse: Experience in designing, implementing, and optimizing data warehouses using Snowflake, including schema design, performance tuning, and query optimization.
- Healthcare Domain Knowledge: Familiarity with US health plan terminologies and datasets is essential.
- Programming/Scripting Languages: Proficiency in Python, SQL, and PySpark is required.
- Cloud Platforms: Experience with AWS or Azure, specifically in building data pipelines, is needed.
- Cloud-Based Data Platforms: Working knowledge of Snowflake and Databricks is preferred.
- Data Pipeline Orchestration: Experience with Azure Data Factory and AWS Glue for orchestrating data pipelines is necessary.
- Relational Databases: Competency with relational databases such as PostgreSQL and MySQL is required, while experience with NoSQL databases is a plus.
- BI Tools: Knowledge of BI tools such as Tableau and PowerBI is expected.
- Version Control: Proficiency with Git, including branching, merging, and pull requests, is required.
- CI/CD for Data Pipelines: Experience in implementing continuous integration and delivery for data workflows using tools like Azure DevOps is essential.
Additional Skills:
- Experience with front-end technologies such as SQL, JavaScript, HTML, CSS, and Angular is advantageous.
- Familiarity with web development frameworks like Flask, Django, and FastAPI is beneficial.
- Basic knowledge of AWS CI/CD practices is a plus.
- Strong verbal and written communication skills, with the ability to articulate results and issues to internal and client teams;
- Proven ability to work creatively and analytically in a problem-solving environment;
- Willingness to travel to other global offices as needed to work with client or other internal project teams.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.
To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Hyderabad

Work from Office

Overview: The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users. This will give you the right visibility into, and understanding of, the criticality of your developments.
Responsibilities:
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope
- Active contribution to code and development in projects and services
- Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption
- Partner with ML engineers working on industrialization
- Communicate with business stakeholders during service design, training, and knowledge transfer
- Support large-scale experimentation and build data-driven models
- Refine requirements into modelling problems
- Influence product teams through data-based recommendations
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create reusable packages or libraries
- Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards
- Leverage big data technologies to help process data and build scaled data pipelines (batch to real time)
- Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines
- Automate ML model deployments
Qualifications:
- BE/B.Tech in Computer Science, Maths, or a technical field
- Overall 2-4 years of experience working as a Data Scientist
- 2+ years' experience building solutions in the commercial or supply chain space
- 2+ years working in a team to deliver production-level analytic solutions
- Fluent in Git (version control); understanding of Jenkins and Docker is a plus
- Fluent in SQL syntax
- 2+ years' experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems
- 2+ years' experience in developing statistical/ML models for business problems with industry tools, with a primary focus on Python or PySpark development
- Data Science: Hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus
- Programming Skills: Hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL
- Statistics: Good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
- Cloud (Azure): Experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage
- Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool
- Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities
- Experience with Agile methodology for teamwork and analytics product creation
- Experience in Reinforcement Learning is a plus
- Experience in simulation and optimization problems in any space is a plus
- Experience with Bayesian methods is a plus
- Experience with causal inference is a plus
- Experience with NLP is a plus
- Experience with Responsible AI is a plus
- Experience with distributed machine learning is a plus
- Experience in DevOps; hands-on experience with one or more cloud service providers (AWS, GCP, Azure preferred)
- Model deployment experience is a plus
- Experience with version control systems like GitHub and CI/CD tools
- Experience in exploratory data analysis
- Knowledge of MLOps/DevOps and deploying ML models is preferred
- Experience using MLflow, Kubeflow, etc. is preferred
- Experience executing and contributing to MLOps automation infrastructure is good to have
- Exceptional analytical and problem-solving skills
- Stakeholder engagement: BU, vendors
- Experience building statistical models in the retail or supply chain space is a plus
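As a tiny worked example of the supervised-regression fundamentals this posting lists (invented toy data, stdlib only): ordinary least squares for a single feature, where the slope is cov(x, y) / var(x) and the intercept follows from the sample means.

```python
# Toy dataset, roughly y = 2x (values are invented for illustration).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
# The fitted line passes through the point of means.
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # 1.96 0.15
```

In practice the same fit would come from scikit-learn or Spark ML, but the closed-form estimator above is what those libraries compute for the one-feature case.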

Posted 1 month ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Mumbai

Work from Office

Senior Azure Data Engineer - L1 Support

Posted 1 month ago

Apply

7.0 - 12.0 years

13 - 23 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

We are eagerly seeking candidates with 5 to 13 years' experience for a Data Engineer / Lead role to join our dynamic team. The ideal candidate is a skilled professional with exposure to Python, Spark, Hive, and AWS, and will play a pivotal role within the team. You will collaborate with internal teams to design, develop, deploy, and maintain software applications at scale.
Role: Data Engineer / Lead
Location: PAN India
Experience: 5 to 13 years
Job type: Full time
Work type: Hybrid
• Data Engineer with a minimum of 5 years of relevant professional experience
• Expertise in Python scripting and big data technologies like Spark, Hive, Presto, etc.
• Experience with AWS services: IAM, EC2, S3, EMR, Lambda Functions, Step Functions, CloudWatch, Redshift, Athena, Glue, etc.
• Hands-on experience with Databricks
• Proficient in writing Spark jobs in PySpark and Scala
• Experience writing queries against both SQL and NoSQL databases (Hive, HBase, MongoDB, Elasticsearch, PostgreSQL, etc.)
• Good understanding of Python data structures, including data frames, datasets, RDDs, etc.
• Experience in ML: integration of ML models
• Experience with data profiling and data migration
• Developing Hive UDFs and Hive jobs
• Proven hands-on software development experience
• Experience with test-driven development
• Preferred: experience in the insurance domain
• Good understanding of data warehousing concepts
• Experience using CI/CD tools like GitHub Actions, Jenkins, Azure DevOps, etc.
• Experience working in Agile projects: sprint planning, grooming, and providing estimations
• Experience using JIRA, Confluence, VS Code or similar IDEs, Jupyter notebooks, etc.
• Good communication and collaboration skills with internal and external teams
• Flexibility and ability to work in an onshore/offshore model involving multiple agile teams
• Mentor and guide junior developers, review code, and be familiar with estimation techniques using story points
• Strong analytical and problem-solving skills
Required qualification: Bachelor's or Master's degree in Computer Science or a related field

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Gurugram, Bengaluru

Hybrid

Warm Greetings from SP Staffing!! Role: Azure Data Engineer Experience Required: 5 to 8 yrs Work Location: Bangalore/Gurgaon Required Skills: Azure Databricks, ADF, PySpark/SQL Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 1 month ago

Apply

6.0 - 8.0 years

0 - 1 Lacs

Hyderabad

Hybrid

ML Engineer | RAG, LLM, AWS, Databricks | 6–8 Yrs Exp | Build scalable ML systems with GenAI, pipelines & cloud integration

Posted 1 month ago

Apply

12.0 - 22.0 years

8 - 18 Lacs

Pune, Bengaluru

Hybrid

Role & responsibilities Understand the business area the project is involved with. Work with data stewards to understand the data sources. Develop a clear understanding of data entities, relationships, cardinality, etc. for the inbound sources, based on inputs from the data stewards / source-system experts. Performance tuning: understand the overall requirement and its reporting impact. Data modeling for the business and reporting models, per the reporting needs or delivery needs of other downstream systems. Experience with components and languages such as Databricks, Python, PySpark, Scala, and R. Ability to ask strong questions to help the team see areas that may lead to problems. Ability to validate the data by writing SQL queries and comparing results against the source system and the transformation mapping. Work closely with teams to collect and translate information requirements into data to develop data-centric solutions. Ensure that industry-accepted data architecture principles and standards are integrated and followed for modeling, stored procedures, replication, regulations, and security, among other concepts, to meet technical and business goals. Continuously improve the quality, consistency, accessibility, and security of our data activity across company needs. Experience with the Azure DevOps project tracking tool or equivalent tools like JIRA. Outstanding verbal and non-verbal communication. Experience with, and desire to work in, a global delivery environment.
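The validation step this posting describes, comparing the transformed target against the source with SQL, reduces to checking control totals. A minimal sketch, with invented table names and Python's built-in sqlite3 standing in for the warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical source table and the table a transformation job loaded.
conn.execute("CREATE TABLE src_orders (order_id TEXT, amount REAL)")
conn.execute("CREATE TABLE tgt_orders (order_id TEXT, amount REAL)")
src = [("O1", 100.0), ("O2", 250.5), ("O3", 75.25)]
conn.executemany("INSERT INTO src_orders VALUES (?, ?)", src)
conn.executemany("INSERT INTO tgt_orders VALUES (?, ?)", src[:2])  # simulate a dropped row

def control_totals(table):
    # Row count plus a summed control figure, rounded to avoid float noise.
    return conn.execute(f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {table}").fetchone()

src_stats = control_totals("src_orders")
tgt_stats = control_totals("tgt_orders")
mismatch = src_stats != tgt_stats  # a mismatch flags the load for investigation
```

In practice the same count/sum comparison would be written against the actual source system and the transformation mapping's target tables.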

Posted 1 month ago

Apply

9.0 - 14.0 years

9 - 24 Lacs

Visakhapatnam

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using PySpark, SQL & DBs. * Collaborate with cross-functional teams on project delivery. *Strong in Databricks, PySpark, SQL * Databricks certification is mandatory *Location: Remote

Posted 1 month ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role As a GCP Data Engineer at Kyndryl, you will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment using GCP data services. You will collaborate with global architects and business teams to design and deploy innovative solutions, supporting data analytics, automation, and transformation needs. Responsibilities: Design, develop, and maintain scalable data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage. Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements. Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs. Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows. Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services. Develop and maintain Python / PySpark code for data processing and integrate with GCP services for seamless data operations. Develop and optimize SQL queries for data analysis and reporting. Monitor and troubleshoot data pipeline issues to ensure timely resolution.
Implement data governance and security best practices within GCP. Perform data quality checks and validation to ensure accuracy and consistency. Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines. Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management. Provide technical support and guidance to junior data engineers and other team members. Participate in code reviews and contribute to continuous improvement of data engineering practices. Implement best practices for cost management and resource utilization within GCP. If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Experience: Bachelor’s or master’s degree in computer science, Engineering, or a related field with over 8 years of experience in data engineering More than 3 years of experience with the GCP data ecosystem Hands-on experience and Strong proficiency in GCP components such as Dataflow, Dataproc, BigQuery, Cloud Functions, Composer, Data Fusion. 
Excellent command of SQL with the ability to write complex queries and perform advanced data transformation. Strong programming skills in PySpark and/or Python, specifically for building cloud-native data pipelines. Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc. Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP. Knowledge of data governance, security, and compliance best practices. Experience with private and public cloud architectures, pros/cons, and migration considerations. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail. Communication Skills: Must be able to communicate with both technical and nontechnical audiences. Able to derive technical requirements with stakeholders. Ability to work independently and in agile teams. Preferred Technical and Professional Experience GCP Data Engineer Certification is highly preferred. Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization. Experience working as a Data Engineer and/or in cloud modernization. Knowledge of Databricks and Snowflake for data analytics. Experience with NoSQL databases. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Familiarity with BI dashboards and Google Data Studio is a plus. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice.
This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
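The "data quality checks and validation" responsibility in the posting above can be sketched as a simple rule-based scan. The record layout and thresholds below are invented for illustration; in a real GCP pipeline the same rules would typically be expressed in BigQuery SQL or inside a Dataflow transform:

```python
# Hypothetical records as a pipeline stage might receive them.
records = [
    {"device_id": "d-1", "temp_c": 21.4},
    {"device_id": "d-2", "temp_c": None},   # missing reading
    {"device_id": "d-1", "temp_c": 21.4},   # exact duplicate
    {"device_id": "d-3", "temp_c": 250.0},  # implausible value
]

def quality_report(rows, key="device_id", field="temp_c", lo=-40.0, hi=85.0):
    """Count null, duplicate, and out-of-range readings in one pass."""
    seen = set()
    issues = {"nulls": 0, "duplicates": 0, "out_of_range": 0}
    for r in rows:
        fingerprint = (r[key], r[field])
        if fingerprint in seen:
            issues["duplicates"] += 1
        seen.add(fingerprint)
        if r[field] is None:
            issues["nulls"] += 1
        elif not (lo <= r[field] <= hi):
            issues["out_of_range"] += 1
    return issues

report = quality_report(records)
```

A report with non-zero counts would typically fail the pipeline run or route the offending rows to a quarantine table.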

Posted 1 month ago

Apply

5.0 - 9.0 years

14 - 24 Lacs

Bengaluru

Work from Office

Job Title: Senior Data Engineer Azure Location: Bengaluru Experience: 6+ years (3+ years on Azure data services preferred) Department: Data Engineering / IT Roles and Responsibilities: The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills. Key Characteristics: Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions. Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring. Has worked on real data challenges and handled high volume, velocity, and variety of data. Excellent analytical and problem-solving skills, willingness to take ownership and resolve technical challenges. Contributes to community-building initiatives like CoE, CoP. Mandatory skills: Azure (Master); ELT (Skill); Data Modeling (Skill); Data Integration & Ingestion (Skill); Data Manipulation and Processing (Skill); GitHub, GitHub Actions, Azure DevOps (Skill); Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest (Skill). Optional skills: Experience in project management, running a scrum team. Experience working with BPC, Planning. Exposure to working with an external technical ecosystem. MkDocs documentation.

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 20 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential. The Team Deloitte's AI&D practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Work you'll do Location: Bangalore/Mumbai/Pune/Delhi/Chennai/Hyderabad/Kolkata Roles: Databricks Data Engineering Senior Consultant We are seeking highly skilled Databricks Data Engineers to join our data modernization team. You will play a pivotal role in designing, developing, and maintaining robust data solutions on the Databricks platform. Your experience in data engineering, along with a deep understanding of Databricks, will be instrumental in building solutions to drive data-driven decision-making across a variety of customers. Mandatory Skills: Databricks, Spark, Python / SQL Responsibilities • Design, develop, and optimize data workflows and notebooks using Databricks to ingest, transform, and load data from various sources into the data lake. • Build and maintain scalable and efficient data processing workflows using Spark (PySpark or Spark SQL) by following coding standards and best practices. • Collaborate with technical and business stakeholders to understand data requirements and translate them into technical solutions.
• Develop data models and schemas to support reporting and analytics needs. • Ensure data quality, integrity, and security by implementing appropriate checks and controls. • Monitor and optimize data processing performance, identifying and resolving bottlenecks. • Stay up to date with the latest advancements in data engineering and Databricks technologies. Qualifications • Bachelor's or Master's degree in any field • 6-10 years of experience in designing, implementing, and maintaining data solutions on Databricks • Experience with at least one of the popular cloud platforms – Azure, AWS or GCP • Experience with ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes • Knowledge of data warehousing and data modelling concepts • Experience with Python or SQL • Experience with Delta Lake • Understanding of DevOps principles and practices • Excellent problem-solving and troubleshooting skills • Strong communication and teamwork skills Your role as a leader At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society, and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization: Develop high-performing people and teams through challenging and meaningful opportunities Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction.
Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make How you will grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our purpose Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world Recruiter tips We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
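The Delta Lake experience the posting above asks for centres on merge/upsert semantics. A minimal sketch of that MERGE-by-key logic, with invented column names and expressed in plain Python for portability (on Databricks this would normally be a `MERGE INTO` statement or `DeltaTable.merge`):

```python
def merge_upsert(target, updates, key="customer_id"):
    """Upsert `updates` into `target` by key, mimicking Delta Lake MERGE semantics:
    matched rows are overwritten, unmatched rows are inserted."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # update when matched, insert when not
    return sorted(merged.values(), key=lambda r: r[key])

# Hypothetical current table state and an incoming batch of changes.
target = [{"customer_id": "C1", "tier": "silver"},
          {"customer_id": "C2", "tier": "gold"}]
updates = [{"customer_id": "C2", "tier": "platinum"},
           {"customer_id": "C3", "tier": "silver"}]
result = merge_upsert(target, updates)
```

The dictionary keyed by `customer_id` plays the role of the merge condition; Delta additionally versions each such change for time travel and rollback.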

Posted 1 month ago

Apply

7.0 - 12.0 years

18 - 30 Lacs

Hyderabad

Hybrid

Overall Purpose of the Job We are seeking an experienced and innovative Data Solutions Architect to join our team. The ideal candidate will have a strong background in designing and implementing data systems, ensuring seamless integration and scalability of data solutions, and leading the design of data architectures that meet business needs. This role requires proficiency in leveraging the Azure IoT Framework to enable IoT-driven data solutions. As a Data Solutions Architect, you will work closely with cross-functional teams to create robust, secure, and scalable data solutions that empower the business to leverage data for strategic decision-making. Role & responsibilities Design and implement scalable, secure, and high-performance data architectures that meet business requirements. Lead the integration of Azure IoT Framework into the architecture, enabling real-time data ingestion, processing, and analysis from IoT devices. Collaborate with stakeholders (business, IT, and data science teams) to understand data needs and translate them into effective solutions. Oversee the full data lifecycle, including data collection, transformation, storage, and consumption. Evaluate and recommend tools, platforms, and technologies for data storage, processing, and analytics, ensuring that the solutions are aligned with business goals. Define data integration strategies, ensuring that data from disparate sources, including IoT devices and sensors, can be ingested, processed, and unified effectively. Create and enforce data governance practices, ensuring compliance with data privacy, security, and regulatory requirements. Lead and mentor teams in the development and implementation of data solutions, ensuring adherence to architectural best practices and design patterns. Ensure the scalability, reliability, and performance of data systems to handle increasing volumes of data, including large datasets from IoT sources. 
Stay updated on emerging trends and technologies in data management, cloud platforms, Azure IoT, and data engineering. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related field. Proven experience (typically 5+ years) in designing, implementing, and managing large-scale data architectures. Expertise in the Azure IoT Framework, including services such as Azure IoT Hub, Azure Stream Analytics, and Azure Digital Twins. Strong understanding of data management, ETL processes, and database design (e.g., SQL, NoSQL, data lakes, and data warehouses). Hands-on experience with modern data technologies and tools such as Hadoop, Spark, Kafka, and ETL frameworks. Proficiency with data governance, security, and privacy standards. Familiarity with machine learning, AI integration, and analytics platforms is a plus. Strong leadership skills with the ability to mentor and guide technical teams. Excellent problem-solving abilities and a deep understanding of system architecture and distributed systems. Excellent communication skills, both written and verbal, with the ability to translate complex technical concepts into business-friendly terms. Preferred Skills: Experience with cloud-native data platforms and services (e.g., Azure Synapse Analytics, Azure Databricks). Knowledge of data visualization and business intelligence tools (e.g., Tableau, Power BI, Looker). Experience with DevOps and CI/CD practices for data pipelines and data-driven applications. Familiarity with microservices architectures and APIs. Certifications in Azure technologies (e.g., Microsoft Certified: Azure Solutions Architect Expert, Microsoft Certified: Azure IoT Developer). Experience with edge computing and real-time data processing for IoT solutions. Immediate joiners required.

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Bengaluru

Remote

Company name: PulseData labs Pvt Ltd (captive unit for URUS, USA) About URUS We are the URUS family (US), a global leader in products and services for Agritech. Job Summary: We are seeking a detail-oriented and results-driven Project Manager to lead projects within our DevOps team, with a strong focus on DevOps project implementations. The ideal candidate will have experience managing end-to-end delivery of AWS cloud-based DevOps projects and collaborating cross-functionally across engineering, analytics, DevOps, and business stakeholders. Key Responsibilities: Lead and manage full-lifecycle projects related to data platform initiatives, especially Databricks-based solutions across AWS or Azure. Develop and maintain project plans, schedules, budgets, and resource forecasts using tools like Jira, MS Project, or similar. Coordinate across technical teams (engineering, ML, DevOps) and business units to define scope, deliverables, and success metrics. Facilitate sprint planning, daily stand-ups, retrospectives, and status reporting following Agile/Scrum or hybrid methodologies. Identify risks, dependencies, and blockers early; drive resolution through mitigation plans and stakeholder communication. Manage vendor relationships (where applicable), ensuring delivery quality, alignment with architecture standards, and on-time execution. Ensure compliance with data governance, security, and documentation standards. Communicate regularly with senior leadership on project status, KPIs, and key decisions. Required Qualifications: 5+ years of experience managing technical or data-related projects, with at least 2+ years in cloud data platforms. Proven experience leading projects involving AWS. Solid understanding of Agile delivery practices, change management, and cross-functional coordination. Proficiency in project tracking tools (Jira, Confluence, Smartsheet, or Microsoft Project).
Exceptional written and verbal communication skills; able to translate technical concepts to business audiences. Preferred Qualifications: PMP, PMI-ACP, or Certified Scrum Master (CSM) certification. Prior experience on multi-cloud platforms is preferred Familiarity with tools such as Airflow, Unity Catalog, Power BI/Tableau, and Git-based CI/CD processes. Soft Skills: Strong leadership and stakeholder management Proactive problem solver with a bias for execution Excellent time management and multitasking ability Comfortable working in a fast-paced, evolving environment

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Pune, Chennai

Work from Office

Data Modeler- Primary Skills: Data modeling (conceptual, logical, physical), relational/dimensional/data vault modeling, ERwin/IBM Infosphere, SQL (Oracle, SQL Server, PostgreSQL, DB2), banking domain data knowledge (Retail, Corporate, Risk, Compliance), data governance (BCBS 239, AML, GDPR), data warehouse/lake design, Azure cloud. Secondary Skills: MDM, metadata management, real-time modeling (payments/fraud), big data (Hadoop, Spark), streaming platforms (Confluent), M&A data integration, data cataloguing, documentation, regulatory trend awareness. Soft skills: Attention to detail, documentation, time management, and teamwork.

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Pune

Work from Office

Project description You'll be working in the GM Business Analytics team located in Pune. The successful candidate will be a member of the global Distribution team, which has team members in London and Pune. We work as part of a global team providing analytical solutions for IB distribution/sales people. Solutions deployed should be extensible globally with minimal localization. Responsibilities Are you passionate about data and analytics? Are you keen to be part of the journey to modernize a data warehouse / analytics suite of applications? Do you take pride in the quality of software delivered in each development iteration? We're looking for someone like that to join us and be a part of a high-performing team on a high-profile project. solve challenging problems in an elegant way master state-of-the-art technologies build a highly responsive and fast updating application in an Agile & Lean environment apply best development practices and effectively utilize technologies work across the full delivery cycle to ensure high-quality delivery write high-quality code and adhere to coding standards work collaboratively with diverse team(s) of technologists You are: Curious and collaborative, comfortable working independently, as well as in a team Focused on delivery to the business Strong in analytical skills. For example, the candidate must understand the key dependencies among existing systems in terms of the flow of data among them. It is essential that the candidate learns to understand the 'big picture' of how the IB industry/business functions. Able to quickly absorb new terminology and business requirements Already strong in analytical tools, technologies, platforms, etc. The candidate must also demonstrate a strong desire for learning and self-improvement. Open to learning home-grown technologies, supporting current-state infrastructure and helping drive future-state migrations.
Imaginative and creative with newer technologies Able to accurately and pragmatically estimate the development effort required for specific objectives You will have the opportunity to work under minimal supervision to understand local and global system requirements, and design and implement the required functionality/bug fixes/enhancements. You will be responsible for components that are developed across the whole team and deployed globally. You will also have the opportunity to provide third-line support to the application's global user community, which will include assisting dedicated support staff and liaising with the members of other development teams directly, some of which will be local and some remote. Skills Must have A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering. 5+ years of relevant experience as a data engineer in Big Data is required. Strong knowledge of programming languages (Python / Scala) and Big Data technologies (Spark, Databricks or equivalent) is required. Strong experience in executing complex data analysis and running complex SQL/Spark queries. Strong experience in building complex data transformations in SQL/Spark. Strong knowledge of database technologies is required. Strong knowledge of Azure Cloud is advantageous. Good understanding of and experience with Agile methodologies and delivery. Strong communication skills with the ability to build partnerships with stakeholders. Strong analytical, data management and problem-solving skills. Nice to have Experience working with the QlikView tool Understanding of QlikView scripting and data model Other Languages English: C1 Advanced Seniority Senior

Posted 1 month ago

Apply

0.0 - 4.0 years

5 - 10 Lacs

Mumbai

Work from Office

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. 
Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience •Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes •Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc •Experience with both relational and NoSQL databases, e.g., PostgreSQL, DB2, MongoDB •Excellent problem-solving, analytical, and critical thinking skills •Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail •Communication Skills: Must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems Preferred Skills and Experience •Experience working as a Data Engineer and/or in cloud modernization •Experience in Data Modelling, to create a conceptual model of how data is connected and how it will be used in business processes •Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization •Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate •Experience working with Kafka, Elasticsearch, Kibana, and maintaining a data lake •Managing interfaces and monitoring for production deployment, including log-shipping tools •Experience in updates, upgrades, patches, VA closure, and support with industry-best tools •Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice.
This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and to support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate and to build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
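The Extract-Transform-Load expertise this role lists can be illustrated in miniature. The sketch below uses plain Python and an in-memory SQLite target; the table name, columns, and quality rule are invented for illustration and are not drawn from any specific Kyndryl stack.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: one record has a missing amount on purpose.
RAW_CSV = """id,amount,currency
1,100.50,usd
2,,usd
3,75.25,eur
"""

def extract(text):
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, normalise currency codes."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # skip incomplete records
        clean.append((int(row["id"]), float(row["amount"]), row["currency"].upper()))
    return clean

def load(rows, conn):
    """Load: write cleaned rows into a relational target table."""
    conn.execute("CREATE TABLE payments (id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(count)  # rows surviving the quality filter: 2
```

In a production pipeline the same three stages would typically be split across a tool like Glue or Databricks, but the shape of the work is the same.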

Posted 1 month ago

Apply

0.0 - 2.0 years

2 - 5 Lacs

Ranchi

Work from Office

Job Title: Data Engineer
Experience: 1+ years (freshers with relevant training and certification may apply)
Location: Ranchi (Work from Office)

Job Summary:
We are looking for a Data Engineer with at least one year of hands-on experience in data engineering practices. The ideal candidate will work closely with our data and analytics teams to build robust and scalable data pipelines. Experience with Snowflake is a plus.

Key Responsibilities:
•Design, build, and maintain data pipelines using modern data engineering tools.
•Transform and clean data from multiple sources for reporting and analytics.
•Optimize data pipelines for performance and scalability.
•Collaborate with cross-functional teams, including BI, analytics, and application developers.
•Monitor, troubleshoot, and maintain data workflows.

Required Skills:
•Strong understanding of data warehousing concepts.
•Proficiency in SQL and Python.
•Knowledge of ETL tools and processes.
•Familiarity with cloud platforms such as AWS, Snowflake, Databricks, Azure, or GCP.
•Exposure to data visualization tools is a plus.

Good to have any of the below certifications:
•Snowflake SnowPro Core Certification
•Snowflake Advanced: Data Engineer Certification
•Google Cloud Professional Data Engineer
•Microsoft Certified: Azure Data Engineer Associate
•AWS Certified Data Analytics – Specialty

Qualifications:
•Bachelor's degree in Computer Science, Information Technology, or a related field.
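The "transform and clean data for reporting" responsibility above often boils down to SQL rollups over warehouse tables. As a minimal sketch, assuming a hypothetical `orders` fact table (the names and figures are invented, and SQLite stands in for a warehouse such as Snowflake):

```python
import sqlite3

# Invented fact table for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, region TEXT, total REAL);
INSERT INTO orders VALUES
  (1, 'north', 120.0),
  (2, 'north', 80.0),
  (3, 'south', 250.0);
""")

# A typical analytics rollup: revenue per region, ordered for a dashboard.
report = conn.execute("""
    SELECT region, SUM(total) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()
print(report)  # [('south', 250.0), ('north', 200.0)]
```

The same `GROUP BY` pattern carries over unchanged to Snowflake or Databricks SQL; only the connection layer differs.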

Posted 1 month ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

Bangalore Rural

Work from Office

Data Governance Specialist
Req number: R5477
Employment type: Full time
Worksite flexibility: Remote

Who we are
CAI is a global technology services firm with over 8,500 associates worldwide and yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary
We are looking for a motivated Data Governance Specialist ready to take us to the next level! If you are strong in creating and maintaining integrations between data systems and data governance tools and are looking for your next career move, apply now.

Job Description
We are looking for a Data Governance Specialist to develop and maintain Collibra workflows to support data governance initiatives. This position will be full time and remote.

What You'll Do
•Develop and maintain Collibra workflows to support data governance initiatives.
•Create and maintain integrations between data systems and data governance tools.
•Write and maintain data quality rules to measure data quality.
•Work with vendors to troubleshoot and resolve technical issues related to workflows and integrations.
•Work with other teams to ensure adherence to data governance (DG) policies and standards.
•Assist in implementing data governance initiatives around data quality, master data, and metadata management.

What You'll Need
Required:
•5-7 years of experience
•Strong programming skills
•Knowledge of system integration and use of middleware solutions
•Proficiency in SQL and relational databases
•Understanding of data governance, including data quality, master data, and metadata management
•Willingness to learn new tools and skills

Preferred:
•Proficient with Java or Groovy
•Proficient with MuleSoft or other middleware
•Proficient with Collibra DIP, Collibra Data Quality, and DQLabs
•Experience with AWS Redshift and Databricks

Physical Demands
•Ability to safely and successfully perform the essential job functions
•Sedentary work that involves sitting or remaining stationary most of the time, with occasional need to move around the office to attend meetings, etc.
•Ability to conduct repetitive tasks on a computer, utilizing a mouse, keyboard, and monitor

Reasonable accommodation statement
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.
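The "data quality rules to measure data quality" duty above can be sketched with a simple completeness check. This is an illustrative plain-SQL rule, not Collibra Data Quality or DQLabs syntax; the table, column, and 90% threshold are invented for the example.

```python
import sqlite3

# Hypothetical customer table with one missing email.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, email TEXT);
INSERT INTO customers VALUES
  (1, 'a@example.com'),
  (2, NULL),
  (3, 'c@example.com'),
  (4, 'd@example.com');
""")

# Rule: completeness of the email column must be at least 90%.
# COUNT(email) skips NULLs, so populated / total is the completeness ratio.
total, populated = conn.execute(
    "SELECT COUNT(*), COUNT(email) FROM customers"
).fetchone()
completeness = populated / total
passed = completeness >= 0.90
print(round(completeness, 2), passed)  # 0.75 False
```

Governance tools wrap the same idea in metadata: a rule definition, a threshold, and a pass/fail score that feeds a data quality dashboard.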

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
•Build data pipelines to ingest, process, and transform data from files, streams, and databases.
•Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
•Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and the big data technologies built on the platform.
•Develop streaming pipelines.
•Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
•Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
•Minimum 3 years of experience on cloud data platforms on Azure
•Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server
•Good to excellent SQL skills
•Exposure to streaming solutions and message brokers such as Kafka

Preferred technical and professional experience:
•Certification in Azure and Databricks, or Cloudera Certified Spark Developer
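The ingest-and-aggregate pattern behind the streaming responsibilities above can be shown without a cluster. The sketch below is a plain-Python analogue (in production this would run on Spark Structured Streaming with a Kafka source); the event fields and page names are invented for illustration.

```python
from collections import defaultdict

def event_stream():
    """Stand-in for a Kafka topic: yields raw click events one at a time."""
    yield {"user": "u1", "page": "/home"}
    yield {"user": "u2", "page": "/home"}
    yield {"user": "u1", "page": "/pricing"}

def aggregate(stream):
    """Micro-batch-style aggregation: page-view counts per page."""
    counts = defaultdict(int)
    for event in stream:
        counts[event["page"]] += 1
    return dict(counts)

counts = aggregate(event_stream())
print(counts)  # {'/home': 2, '/pricing': 1}
```

In PySpark the same logic would be a `groupBy("page").count()` over a streaming DataFrame; the consume-transform-aggregate shape is identical.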

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies