5.0 - 10.0 years
15 - 25 Lacs
Noida, Pune, Bengaluru
Hybrid
Job description

Key Responsibilities:

Data Pipeline Development & Optimization:
- Design, develop, and maintain scalable, high-performance data pipelines using PySpark and Databricks.
- Ensure data quality, consistency, and security throughout all pipeline stages.
- Optimize data workflows and pipeline performance to ensure efficient data processing.

Cloud-Based Data Solutions:
- Architect and implement cloud-native data solutions using AWS services (e.g., S3, Glue, Lambda, Redshift), GCP (Dataproc, Dataflow), and Azure (ADF, ADLS).
- Work on ETL processes to transform, load, and process data across cloud platforms.

SQL & Data Modeling:
- Use SQL (including window functions) to query and analyze large datasets efficiently (a short PySpark sketch follows this listing).
- Work with data schemas and models relevant to various business contexts (e.g., star/snowflake schemas, normalized and denormalized models).

Data Security & Compliance:
- Implement robust data security measures, ensuring encryption, access control, and compliance with industry standards and regulations.
- Monitor and troubleshoot data pipeline performance and security issues.

Collaboration & Communication:
- Collaborate with cross-functional teams (data scientists, software engineers, and business stakeholders) to design and integrate end-to-end data pipelines.
- Communicate technical concepts clearly and effectively to non-technical stakeholders.

Domain Expertise:
- Understand and work with domain-related data, tailoring solutions to the customer's specific business needs.
- Optimize data solutions for the business context, ensuring alignment with customer requirements and goals.

Mentorship & Leadership:
- Provide guidance to junior team members, fostering a collaborative environment and ensuring best practices are followed.
- Drive innovation and promote a culture of continuous learning and improvement within the team.

Required Qualifications:

Experience:
- 6-8 years of total experience in data engineering, with 3+ years of hands-on experience in Databricks, PySpark, and AWS.
- 3+ years of experience in Python and SQL for data engineering tasks.
- Experience with cloud ETL services such as AWS Glue, GCP Dataproc/Dataflow, and Azure ADF/ADLS.

Technical Skills:
- Strong proficiency in PySpark for large-scale data processing and transformation.
- Expertise in SQL, including window functions, for data manipulation and querying.
- Experience with cloud-based ETL tools (AWS Glue, GCP Dataflow, Azure ADF) and understanding of their integration with cloud data platforms.
- Deep understanding of data schemas and models used across various business contexts.
- Familiarity with data warehousing optimization techniques, including partitioning, indexing, and query optimization.
- Knowledge of data security best practices (e.g., encryption, access control, and compliance).

Agile Methodologies:
- Experience working in Agile (Scrum or Kanban) teams for iterative development and delivery.

Communication:
- Excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.

Skills: Python, Databricks, PySpark, SQL
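For illustration only, here is a minimal PySpark sketch of the kind of windowed transformation this role describes (deduplication plus a running aggregate on Databricks). The table and column names are hypothetical placeholders, not details from the employer.

```python
# Minimal sketch of a windowed PySpark transformation; names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("curated-sales-pipeline").getOrCreate()

raw = spark.read.table("sales_raw")  # assumed source table

# Deduplicate to the latest record per customer/order using a window function,
# then add a running total per customer ordered by event time.
w_latest = Window.partitionBy("customer_id", "order_id").orderBy(F.col("order_ts").desc())
w_running = (Window.partitionBy("customer_id").orderBy("order_ts")
                   .rowsBetween(Window.unboundedPreceding, Window.currentRow))

curated = (
    raw.withColumn("rn", F.row_number().over(w_latest))
       .filter(F.col("rn") == 1)
       .drop("rn")
       .withColumn("running_amount", F.sum("amount").over(w_running))
)

# On Databricks this would typically land in a Delta table.
curated.write.format("delta").mode("overwrite").saveAsTable("sales_curated")
```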
Posted 3 weeks ago
10.0 - 14.0 years
18 - 20 Lacs
Noida
Work from Office
Position Summary
This is a highly visible role that requires a perfect combination of deep technical credibility, strategic acumen, and demonstrable leadership competency. You will be the ultimate trusted advisor, capable of engaging business and technology leaders within the world's largest enterprises and guiding their strategic AI-enabled journey.

The Country Leader, AI Architecture, is responsible for leading the Labs architectural services within the region. You will provide hands-on technical leadership while managing a small team of senior AI architects and consultants, operating in a fast-moving, highly innovative environment and collaborating with senior Sales and Technical leaders. You will have business responsibility for the provision of innovation-led Labs services, focusing on the design and implementation of advanced AI solutions that enable genuine transformational outcomes.

This hands-on leadership role demands a deep understanding of AI and related technologies running in Edge, on-prem, and Public Cloud environments. Acting at the forefront of our industry, you will be fully conversant with Generative AI and its impact at both the individual employee and strategic organisational level. The ideal candidate will be an established thought leader in the AI domain, with solid architectural and engineering credentials maintained at the highest level: working ahead of industry trends, deeply passionate about AI-enabled business transformation, and demonstrating a strong innovation-led posture. As a thought leader, you will interact frequently with CxO-level clients and industry leaders, provide expert opinions, and contribute to HCL's strategic vision.

Key Responsibilities

Technical & Engineering Leadership
- Act as the ultimate Design Authority for sophisticated AI solutions and related technology architecture.
- Lead high-level architectural discussions with clients, providing expert guidance on best practices for AI implementations across AI PC, Edge, Data Centre, and Public Cloud environments.
- Ensure solutions align with modern best practices across the full spectrum of platforms and environments.
- Maintain a deep understanding across the GPU/NPU, cognitive infrastructure, application, and Copilot/agent domains.
- Contribute to HCLTech thought leadership in the AI and Cloud domains with a deep understanding of open-source (e.g., Kubernetes, OPEA) and partner technologies.
- Collaborate on joint technical projects with global partners, including Google, Microsoft, AWS, NVIDIA, IBM, Red Hat, Intel, and Dell.

Service Delivery & Innovation
- Design innovative AI solutions from ideation to MVP, rapidly enabling genuine business value.
- Optimize AI and cloud architectures to meet client requirements, balancing efficiency, accuracy, and effectiveness.
- Assess and review existing complex solutions and recommend architectural improvements to transform applications with the latest AI technologies.
- Drive the adoption of cutting-edge GenAI technologies, spearheading initiatives that push the boundaries of AI capability across the full spectrum of environments.

Thought Leadership and Client Engagement
- Provide expert architectural and strategy guidance to clients on incorporating Generative AI into their business and technology landscape.
- Conduct workshops, briefings, and strategic dialogues to educate clients on AI benefits and applications, establishing strong, trust-based relationships.
- Act as a trusted advisor, contributing to technical projects with a strong focus on technical excellence and on-time delivery.
- Author whitepapers and blogs and speak at industry events, maintaining a visible presence as a thought leader in AI and associated technologies.

Collaboration and Customer Engagement
- Engage with multiple customers simultaneously, building high-impact consultative relationships.
- Work closely with internal teams and global partners to ensure seamless collaboration and knowledge sharing across projects.
- Maintain hands-on technical credibility, staying ahead of industry trends and mentoring others in the organization.

Management and Leadership
- Demonstrable track record of building and managing small architectural or engineering teams.
- Support career growth and professional development of the team.
- Enrich and enable world-class technical excellence across the team, supported by a culture of collaboration, respect, diversity, inclusion, and deep, trustful relationships.

Mandatory Skills & Experience
- Management & leadership: demonstrable track record of building and leading architectural or engineering teams; proven ability to combine strategic business and commercial skills, performing at the highest level in senior client relationships.
- Experience: 10+ years of architecture design; 10+ years of software engineering; 5+ years in a senior team leader or similar management position; significant client-facing engagement within a GSI, system integrator, professional services, or technology organization.
- Technologies: professional-level expertise in Public Cloud environments (AWS, Azure, Google Cloud); demonstrable coding proficiency in Python, Java, or Go.
- AI Expertise: advanced machine learning algorithms, GenAI models (e.g., GPT, BERT, DALL-E, Gemini), NLP techniques; working familiarity with Copilot solutions in both the software engineering and office productivity domains.
- Business Expertise: extensive track record performing a lead technical role in a sales, business development, or other commercial environment; negotiating and consultative skills; experience leading the complete engagement lifecycle.
- Communication: experienced public speaker, with an ability to connect with senior business leaders.
- Project Methodologies: Agile and Scrum project management.

Desired Skills & Experience
- Knowledge of GenAI operations (LLMOps); experience governing AI models in production environments.
- Proficiency in data engineering for AI, including data preprocessing, feature engineering, and pipeline creation.
- Expertise in AI model fine-tuning and evaluation, with a focus on improving performance for specialized tasks.
- Copilot design, engineering, and extensions.
- Knowledge of Responsible AI, including governance and ethics; bias mitigation, with experience implementing strategies to ensure fair and unbiased AI solutions.
- Deep learning frameworks (TensorFlow, PyTorch).
- Innovation and emerging technology trends.
- Strategic AI vision and roadmapping.
- Enthusiastic about working in a fast-paced environment using the latest technologies, and passionate about HCL's dynamic and high-energy Lab culture.

Verifiable Certification
- Recognized professional certification from Google, Microsoft, or AWS in an AI- and/or Cloud-related domain.

Soft Skills and Behavioural Competencies
- Exemplary communication and leadership skills, capable of inspiring teams and making strategic decisions that align with business goals.
- Demonstrates a strong customer orientation, innovative problem-solving abilities, and effective cross-cultural collaboration.
- Expert at driving organizational change and fostering a culture of innovation.
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks, PySpark, Spark SQL, and Delta Live Tables.
- Collaborate with cross-functional teams to understand data requirements and translate them into efficient data models and pipelines.
- Implement best practices for data engineering, including data quality and data security.
- Optimize and troubleshoot complex data workflows to ensure high performance and reliability.
- Develop and maintain documentation for data engineering processes and solutions.

Requirements:
- Bachelor's or Master's degree.
- Proven experience as a Data Engineer, with a focus on Databricks, PySpark, Spark SQL, and Delta Live Tables.
- Strong understanding of data warehousing concepts, ETL processes, and data modelling.
- Proficiency in programming languages such as Python and SQL.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
- Strong leadership and communication skills, with the ability to mentor and guide team members.
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
GCP Data Engineer - BigQuery, SQL, Python, Talend ETL Programmer (GCP or any cloud technology)

Job Description:
- Experienced GCP data engineer with BigQuery, SQL, Python, and Talend ETL programming on GCP or any cloud technology.
- Good experience building pipelines of GCP components to load data into BigQuery and into cloud storage buckets.
- Excellent data analysis skills.
- Good written and oral communication skills.
- Self-motivated and able to work independently.
Posted 3 weeks ago
5.0 - 10.0 years
0 - 0 Lacs
Chennai
Work from Office
- Deep knowledge of cloud platforms (AWS, Azure, Google Cloud), including their AI-specific services such as AWS SageMaker or Google AI Platform.
- AI/ML Proficiency: in-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, and Scikit-learn, along with experience in ML model lifecycle management.
- Infrastructure as Code: proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes.

Regards,
JayasuryaV
jayasurya.v@vdartinc.com
Posted 3 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Hyderabad, Greater Noida
Work from Office
Work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions.
- Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools.
- Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support.
- Provide technical leadership and collaborate within a team environment as well as work independently.
- Be a part of a DevOps team that completely owns and supports its product.
- Implement batch and streaming data pipelines using cloud technologies (a short ingestion sketch follows this listing).
- Lead development of coding standards, best practices, and privacy and security guidelines.
- Mentor others on technical and domain skills to create multi-functional teams.

All you'll need for success.

Minimum Qualifications - Education & Prior Job Experience:
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training.
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions.
3. 3 years of data engineering experience using SQL.
4. 2 years of cloud development (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
5. Combination of development, administration, and support experience in several of the following tools/platforms required:
   a. Scripting: Python, PySpark, Unix, SQL
   b. Data platforms: Teradata, SQL Server
   c. Azure Data Explorer (administration skills are a plus)
   d. Azure Cloud Technologies

Top 3 mandatory skills and experience: SQL, Python, PySpark
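As a rough illustration of the batch ingestion this posting describes, the hedged sketch below assumes a Databricks notebook reading raw CSV files from an Azure Data Lake Storage Gen2 container and landing a partitioned Delta table; the storage account, container, column names, and table names are invented for the example, not details from the posting.

```python
# Hedged sketch: batch ingestion from ADLS Gen2 into a Delta table on Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 path; in practice this would come from pipeline config.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/2024/"

orders = (
    spark.read.option("header", "true").csv(raw_path)
         .withColumn("ingest_date", F.current_date())
)

# Basic preparation: enforce types and drop obviously bad rows before landing.
orders = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("order_id").isNotNull())
)

(orders.write.format("delta")
       .mode("append")
       .partitionBy("ingest_date")
       .saveAsTable("bronze.orders"))  # assumed target schema/table
```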
Posted 3 weeks ago
4.0 - 9.0 years
3 - 8 Lacs
Pune
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Big Data Engineer (Spark, Scala)
Experience: 4 to 10 years

Key Responsibilities:
- Data Engineering: design, develop, and maintain large-scale distributed data processing pipelines and data solutions using Apache Spark with Scala and/or Python.
- Data Integration: integrate various data sources (batch and real-time), in both structured and unstructured formats, into big data platforms such as Hadoop, AWS EMR, or Azure HDInsight.
- Performance Optimization: optimize Spark jobs for better performance, managing large datasets and ensuring efficient resource usage (illustrated in the sketch after this listing).
- Architecture Design: participate in the design and implementation of data pipelines and data lakes that support analytics, reporting, and machine learning applications.
- Collaboration: collaborate with data scientists, analysts, and business stakeholders to understand data requirements and ensure solutions are aligned with business goals.
- Big Data Tools: implement and manage big data technologies such as Hadoop, Kafka, HBase, Hive, Presto, etc.
- Automation: automate repetitive tasks using scripting and monitoring solutions for continuous data pipeline management.
- Troubleshooting: identify and troubleshoot data pipeline issues and ensure data integrity.
- Cloud Platforms: work with cloud-based services and platforms such as AWS, Azure, or Google Cloud for data storage, compute, and deployment.
- Code Quality: ensure high code quality by following best practices, conducting code reviews, and implementing unit and integration tests.

Technical Skills:
- Experience: 6-9 years of hands-on experience in big data engineering with a focus on Apache Spark (preferably with Scala and/or Python).
- Languages: proficiency in Scala and/or Python for building scalable data processing applications; knowledge of Java is a plus.
- Big Data Frameworks: strong experience with Apache Spark, Hadoop, Hive, HBase, Kafka, and other big data tools.
- Data Processing: strong understanding of batch and real-time data processing and workflows.
- Cloud Experience: proficient in cloud platforms such as AWS, Azure, or Google Cloud Platform for deploying and managing big data solutions.
- SQL/NoSQL: experience working with SQL and NoSQL databases, particularly Hive, HBase, or Cassandra.
- Data Integration: strong skills in integrating and processing diverse data sources, including working with data lakes and data warehouses.
- Performance Tuning: hands-on experience in performance tuning and optimization of Spark jobs and jobs running on Hadoop clusters.
- Data Pipelines: strong background in designing, building, and maintaining robust data pipelines for large-scale data processing.
- Version Control: familiarity with Git or other version control systems.
- DevOps & Automation: knowledge of automation tools and CI/CD pipelines for data workflows (Jenkins, Docker, Kubernetes).
- Analytical Skills: strong problem-solving skills and a deep understanding of data modeling, data structures, and algorithms.

Notice period: immediate joiners
Location: Pune
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
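For context on the performance-tuning skills this listing asks for, here is a minimal, hedged PySpark sketch (the role prefers Scala, but the same ideas apply) showing a broadcast join, explicit repartitioning on the aggregation key, and selective caching. All paths, dataset names, and the partition count are assumptions for illustration.

```python
# Hedged sketch of common Spark job tuning moves; names and values are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark-tuning-example").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")        # large fact data (assumed path)
lookup = spark.read.parquet("s3://example-bucket/dim_country/")   # small dimension table

# Broadcast the small dimension to avoid a shuffle-heavy join,
# and repartition the fact data on the join/aggregation key.
joined = (
    events.repartition(200, "country_code")
          .join(F.broadcast(lookup), "country_code")
)

# Cache only when the result is reused by several downstream actions.
joined.cache()

daily = joined.groupBy("country_code", "event_date").agg(F.count("*").alias("events"))
daily.write.mode("overwrite").parquet("s3://example-bucket/aggregates/daily_events/")
```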
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Chennai
Work from Office
Job Purpose:
We are looking for a Senior Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and in supporting existing legacy SQL Server environments. The ideal candidate will have a strong background in Spark-based development, demonstrate high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required, coupled with strong communication skills. Experience with Azure Data Services (Azure Data Factory, Azure Synapse, or similar tools), experience creating DAGs, implementing activities, and running Apache Airflow, and familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps are also expected.

Key Responsibilities:
- Design, develop, and maintain ETL notebook orchestration pipelines using PySpark and Microsoft Fabric.
- Work with Apache Delta Lake tables, Change Data Feed (CDF), Lakehouses, and custom libraries (see the CDF sketch after this listing).
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and other team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
- Debug code, break it down into testable components, identify issues, and resolve them.
- Develop, maintain, and enforce data engineering best practices, coding standards, and documentation.
- Conduct code reviews and provide constructive feedback to improve team productivity and code quality.
- Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 10+ years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is a plus.
- Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage.
- Experience working with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Experience creating DAGs, implementing activities, and running Apache Airflow.
- Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.
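As a small illustration of the Delta Lake Change Data Feed (CDF) work mentioned above, the following hedged sketch assumes CDF is already enabled on a source table; the table names and starting version are placeholders rather than details from this posting.

```python
# Hedged sketch: reading a Delta table's Change Data Feed for incremental processing.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read only the rows that changed since version 5 of the source table (assumed).
changes = (
    spark.read.format("delta")
         .option("readChangeFeed", "true")
         .option("startingVersion", 5)
         .table("lakehouse.customers")
)

# CDF exposes _change_type (insert / update_preimage / update_postimage / delete),
# which makes incremental loads into downstream tables straightforward.
upserts = changes.filter(F.col("_change_type").isin("insert", "update_postimage"))
upserts.write.format("delta").mode("append").saveAsTable("lakehouse.customers_incremental")
```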
Posted 3 weeks ago
6.0 - 10.0 years
7 - 11 Lacs
Greater Noida
Work from Office
Work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions.
- Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools.
- Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support.
- Provide technical leadership and collaborate within a team environment as well as work independently.
- Be a part of a DevOps team that completely owns and supports its product.
- Implement batch and streaming data pipelines using cloud technologies.
- Lead development of coding standards, best practices, and privacy and security guidelines.
- Mentor others on technical and domain skills to create multi-functional teams.

All you'll need for success.

Minimum Qualifications - Education & Prior Job Experience:
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training.
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions.
3. 3 years of data engineering experience using SQL.
4. 2 years of cloud development (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
5. Combination of development, administration, and support experience in several of the following tools/platforms required:
   a. Scripting: Python, PySpark, Unix, SQL
   b. Data platforms: Teradata, SQL Server
   c. Azure Data Explorer (administration skills are a plus)
   d. Azure Cloud Technologies

Top 3 mandatory skills and experience: SQL, Python, PySpark
Posted 3 weeks ago
9.0 - 14.0 years
55 - 60 Lacs
Bengaluru
Hybrid
Dodge Position Title: Technology Lead, STG Labs
Location: Bangalore, India

About Dodge
Dodge Construction Network exists to deliver the comprehensive data and connections the construction industry needs to build thriving communities. Our legacy is deeply rooted in empowering our customers with transformative insights, igniting their journey towards unparalleled business expansion and success. We serve decision-makers who seek reliable growth and who value relationships built on trust and quality. By combining our proprietary data with cutting-edge software, we deliver to our customers the essential intelligence needed to excel within their respective landscapes. We propel the construction industry forward by transforming data into tangible guidance, driving unparalleled advancement. Dodge is the catalyst for modern construction. https://www.construction.com/

About Symphony Technology Group (STG)
STG is a Silicon Valley (California) based private equity firm with a long and successful track record of transforming high-potential software and software-enabled services companies, as well as insights-oriented companies, into definitive market leaders. The firm brings expertise, flexibility, and resources to build strategic value and unlock the potential of innovative companies. Partnering to build customer-centric, market-winning portfolio companies, STG creates sustainable foundations for growth that bring value to all existing and future stakeholders. The firm is dedicated to transforming and building outstanding technology companies in partnership with world-class management teams. With over $5.0 billion in assets under management, including a recently raised $2.0 billion fund, STG's expansive portfolio has consisted of more than 30 global companies.

STG Labs is the incubation center for many of STG's portfolio companies, building their engineering, professional services, and support delivery teams in India. STG Labs offers an entrepreneurial start-up environment for software and AI engineers, data scientists and analysts, and project and product managers, and provides a unique opportunity to work directly for a software or technology company. Based in Bangalore, STG Labs supports hybrid working. https://stg.com

Roles and Responsibilities
- Lead the design, deployment, and management of data mart and analytics infrastructure leveraging AWS services.
- Implement and manage CI/CD pipelines using industry-leading DevOps practices and tools.
- Design, implement, and oversee API architecture, ensuring robust, scalable, and secure REST API development using AWS API Gateway.
- Collaborate closely with data engineers, architects, and analysts to design highly performant and scalable data solutions.
- Mentor and guide engineering teams, fostering a culture of continuous learning and improvement.
- Optimize cloud resources for cost-efficiency, scalability, and reliability.
- Establish best practices and standards for AWS infrastructure, DevOps processes, API design, and data analytics workflows.

Qualifications
Hands-on working knowledge and experience is required in:
- Data structures, memory management, and basic algorithms (search, sort, etc.)
- AWS data services: Redshift, Glue, EMR, Athena, Lake Formation, Lambda
- Infrastructure-as-code tools: Terraform, AWS CloudFormation
- Scripting languages: Python, Bash, SQL
- DevOps tooling: Docker, Kubernetes, Jenkins, Bitbucket (must be comfortable in CLI/terminal environments)
- Command line / terminal environments
- AWS security best practices
- Scalable data marts, analytics systems, and RESTful APIs

Hands-on working knowledge and experience is preferred in:
- Container orchestration: Kubernetes, EKS
- Data visualization and warehousing: Tableau, data warehouses
- Machine learning and big data pipelines

Certifications Preferred:
- AWS certifications (Solutions Architect Professional, DevOps Engineer)
Posted 3 weeks ago
3.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
We are looking for a Senior Data Platform Engineer to lead the design, development, and optimization of our data platform infrastructure. In this role, you will drive scalability, reliability, and performance across our data systems, working closely with data engineers, analysts, and product teams to enable data-driven decision-making at scale.

Required Skills & Experience:
- Architect and implement scalable, secure, and high-performance data platforms (on AWS cloud using Databricks).
- Build and manage data pipelines and ETL processes using modern data engineering tools (AWS RDS, REST APIs, and S3-based ingestions; a short ingestion sketch follows this listing).
- Monitor and maintain the production data pipelines and work on enhancements.
- Optimize data systems for performance, reliability, and cost efficiency.
- Implement data governance, quality, and observability best practices per Freshworks standards.
- Collaborate with cross-functional teams to support data needs.

Qualifications:
1. Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
2. Good exposure to data structures and algorithms.
3. Proven backend development experience using Scala, Spark, or Python.
4. Strong understanding of REST API development, web services, and microservices architecture.
5. Good to have: experience with Kubernetes and containerized deployment.
6. Proficient in working with relational databases like MySQL, PostgreSQL, or similar platforms.
7. Solid understanding and hands-on experience with AWS cloud services.
8. Strong knowledge of code versioning tools, such as Git and Jenkins.
9. Excellent problem-solving skills, critical thinking, and keen attention to detail.
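One of the ingestion paths listed above (AWS RDS and S3-based ingestion) might look roughly like the hedged sketch below, which reads a MySQL table on RDS over JDBC in parallel and lands it in S3 as Parquet. The endpoint, credentials handling, bounds, and bucket names are illustrative assumptions, not details from the posting.

```python
# Hedged sketch: parallel JDBC read from an RDS MySQL table, landed to S3 as Parquet.
# Assumes the MySQL JDBC driver is available on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rds-to-s3-ingest").getOrCreate()

orders = (
    spark.read.format("jdbc")
         .option("url", "jdbc:mysql://example-rds.us-east-1.rds.amazonaws.com:3306/shop")
         .option("dbtable", "orders")
         .option("user", "etl_user")
         .option("password", "<fetched-from-secrets-manager>")  # never hard-code in practice
         .option("partitionColumn", "order_id")  # parallelise the read across executors
         .option("lowerBound", "1")
         .option("upperBound", "10000000")
         .option("numPartitions", "16")
         .load()
)

orders.write.mode("overwrite").parquet("s3://example-data-platform/raw/orders/")
```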
Posted 3 weeks ago
5.0 - 10.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Roles and Responsibilities
- Lead the design, development, and maintenance of data pipelines and ETL processes.
- Architect and implement scalable data solutions using Databricks and AWS.
- Optimize data storage and retrieval systems using Rockset, ClickHouse, and CrateDB.
- Develop and maintain data APIs using FastAPI.
- Orchestrate and automate data workflows using Airflow (a minimal DAG sketch follows this listing).
- Collaborate with data scientists and analysts to support their data needs.
- Ensure data quality, security, and compliance across all data systems.
- Mentor junior data engineers and promote best practices in data engineering.
- Evaluate and implement new data technologies to improve the data infrastructure.
- Participate in cross-functional projects and provide technical leadership.
- Manage and optimize data storage solutions using AWS S3, implementing best practices for data lakes and data warehouses.
- Implement and manage Databricks Unity Catalog for centralized data governance and access control across the organization.

Qualifications Required
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering, with at least 2-3 years in a lead role
- Strong proficiency in Python, PySpark, and SQL
- Extensive experience with Databricks and AWS cloud services
- Hands-on experience with Airflow for workflow orchestration
- Familiarity with FastAPI for building high-performance APIs
- Experience with columnar databases like Rockset, ClickHouse, and CrateDB
- Solid understanding of data modeling, data warehousing, and ETL processes
- Experience with version control systems (e.g., Git) and CI/CD pipelines
- Excellent problem-solving skills and ability to work in a fast-paced environment
- Strong communication skills and ability to work effectively in cross-functional teams
- Knowledge of data governance, security, and compliance best practices
- Proficiency in designing and implementing data lake architectures using AWS S3
- Experience with Databricks Unity Catalog or similar data governance and metadata management tools

Skills and Experience Required (Tech Stack): Databricks, Python, PySpark, SQL, Airflow, FastAPI, AWS (S3, IAM, ECR, Lambda), Rockset, ClickHouse, CrateDB

Why you'll love working with us:
- Opportunity to work on business challenges from top global clientele with high impact.
- Vast opportunities for self-development, including online university access and sponsored certifications.
- Sponsored tech talks, industry events, and seminars to foster innovation and learning.
- Generous benefits package including health insurance, retirement benefits, flexible work hours, and more.
- Supportive work environment with forums to explore passions beyond work.

This role presents an exciting opportunity for a motivated individual to contribute to the development of cutting-edge solutions while advancing their career in a dynamic and collaborative environment.
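For the Airflow orchestration responsibility above, a minimal DAG sketch is shown below; the task logic, schedule, and DAG id are illustrative assumptions rather than details from the posting.

```python
# Hedged sketch: a small daily Airflow DAG that runs an ingestion step, then a transform step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_data(**context):
    # e.g. pull files into S3 or trigger a Databricks ingestion job (placeholder)
    print("ingesting raw data for", context["ds"])


def transform_to_curated(**context):
    # e.g. submit a Spark job that builds curated tables (placeholder)
    print("transforming data for", context["ds"])


with DAG(
    dag_id="daily_curated_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_data", python_callable=ingest_raw_data)
    transform = PythonOperator(task_id="transform_to_curated", python_callable=transform_to_curated)

    # Run the transform only after ingestion succeeds.
    ingest >> transform
```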
Posted 3 weeks ago
5.0 - 10.0 years
10 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities
- Databricks skill set with PySpark and SQL.
- Strong proficiency in PySpark and SQL.
- Understanding of data warehousing concepts.
- ETL processes / data pipeline building with ADB/ADF.
- Experience with the Azure cloud platform and knowledge of data manipulation techniques.
- Experience working with business teams to convert requirements into technical stories for migration.
- Leading technical discussions and implementing the solution.
- Experience with multi-tenant architecture, having delivered projects on the Databricks + Azure combination.
- Exposure to Unity Catalog is useful.
Posted 3 weeks ago
1.0 - 4.0 years
10 - 14 Lacs
Pune
Work from Office
Overview
- Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark, Databricks, BigQuery, and Airflow/Composer.
- Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization.
- Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes.
- Analyze existing SQL queries, functions, and stored procedures for performance improvements.
- Develop database routines such as procedures, functions, and views/materialized views.
- Participate in data migration projects and understand technologies like Delta Lake, warehouses, and BigQuery.
- Debug and solve complex problems in data pipelines and processes.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong understanding of distributed data processing platforms like Databricks and BigQuery.
- Proficiency in Python, PySpark, and SQL programming languages.
- Experience with performance optimization for large datasets.
- Strong debugging and problem-solving skills.
- Fundamental knowledge of cloud services, preferably Azure or GCP.
- Excellent communication and teamwork skills.

Nice to Have:
- Experience in data migration projects.
- Understanding of technologies like Delta Lake/warehouse.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
- Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law.

MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com
Posted 3 weeks ago
8.0 - 12.0 years
35 - 40 Lacs
Mumbai
Work from Office
JOB OVERVIEW:
Part of Business Focused IT, the candidate would be in charge of scaling up and managing an enterprise-wide data platform that supports the analytical needs of the complete Pharma business (extensible to other businesses as required). The platform should be flexible enough to support the business operations of the future and provide storytelling-style, intuitive analytics. This position is part of the Analytics Center of Excellence.

Essential Skills & Experience:
- BS/MS degree in computer science, mathematics, or an equivalent relevant degree, with 8+ years in analytics, BI, and data warehousing.
- Experience leading in a highly cross-functional environment, collaborating closely with IT, Finance, and Engineering.
- Hands-on experience in architecting and building scalable data platforms, ETL processes, and distributed systems for data processing, data migration, and data quality.
- Strong familiarity and working knowledge of cloud platforms like AWS and Snowflake.
- Experience in building compelling data visualizations using business intelligence and data visualization tools like Tableau, BI, and Qlik.
- Ability to develop and execute data strategy in collaboration with business leads.
- Excellent problem-solving, with the ability to translate complex data into business recommendations for business stakeholders.
- Excellent communication skills, with the ability to explain complex and abstract technological concepts to business stakeholders.
- Proficiency in SQL for extracting, aggregating, and processing large volumes of structured/unstructured data; experience in advanced query optimization techniques.
- Proficiency in data acquisition and data preparation by pulling data from various sources.
- Self-driven, with the ability to learn new, unfamiliar tools and deliver on ambiguous projects with incomplete data.
- Experience reviewing and providing feedback on architecture and code reviews.

KEY ROLES/RESPONSIBILITIES:
- Responsible for developing and maintaining the global data marketplace (data lake).
- Manage the sourcing and acquisition of internal (including IT and OT) and external data sets.
- Ensure adherence of data to enterprise business rules and, especially, to legal and regulatory requirements.
- Define the data quality standards for cross-functional data used in BI/analytics models and reports.
- Provide input into data integration standards and the enterprise data architecture.
- Responsible for modelling and designing the application data structure, storage, and integration, and leading the database analysis, design, and build effort.
- Review the database deliverables throughout development, thereby ensuring quality, traceability to requirements, and adherence to all quality management plans and standards.
- Develop strategies for data acquisition, dissemination, and archival.
- Manage the data architecture within the big data solution, such as Hadoop, Cloudera, etc.
- Responsible for modelling and designing the big data structure, storage, and integration, and leading the database analysis, design, visualization, and build effort.
- Review the database deliverables throughout development, thereby ensuring quality, traceability to requirements, and adherence to all quality management plans and standards.
- Work with partners and vendors (in a multi-vendor environment) for various capabilities.
- Continuously review the analytics stack for improvements: performance improvements, reducing overall TCO through cost optimization, and bettering the predictive capabilities.
- Bring in thought leadership with regard to analytics to make Piramal Pharma an analytics-driven business, and help in driving business KPIs.
- Prepare Analytics Platform budgets (both CAPEX and OPEX) for assigned initiatives and roll out the initiatives within budget and projected timelines.
- Drive the MDM strategy and implementation initiative.
- Responsible for overall delivery and customer satisfaction for business services, interaction with business leads, project status management and reporting, implementation management, and identifying further opportunities for automation within PPL.
- Ensure IT compliance in all project rollouts as per regulatory guidelines.
- Conduct change management and impact analysis for approved enhancements.
- Uphold data integrity requirements following ALCOA+ guidelines.
- Monitor SLAs and KPIs as agreed upon by the business, offering root-cause analysis and risk-mitigation action plans when needed.
- Drive awareness and learning across Piramal Pharma in the Enterprise Data Platform.
Posted 3 weeks ago
4.0 - 7.0 years
10 - 20 Lacs
Noida, Hyderabad, Pune
Work from Office
Streaming data - technical skills requirements:
Experience: 5+ years. Solid hands-on and solution-architecting experience in big data technologies (AWS preferred).

Skills required:
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR.
- Hands-on experience with a programming language like Scala with Spark.
- Good command of, and working experience with, Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases.
- Hands-on working experience with any of the data engineering/analytics platforms (Hortonworks, Cloudera, MapR, AWS); AWS preferred.
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie.
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming); see the streaming sketch after this listing.
- Data warehouse exposure with Apache NiFi, Apache Airflow, and Kylo.
- Operationalization of ML models on AWS (e.g., deployment, scheduling, model monitoring, etc.).
- Feature engineering and data processing to be used for model development.
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.).
- Experience building data pipelines for structured/unstructured data, real-time/batch processing, and synchronous/asynchronous events using MQ, Kafka, and stream processing.
- Hands-on working experience analysing source system data and data flows, working with structured and unstructured data.
- Must be very strong in writing SQL queries.
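As a hedged sketch of the event-driven processing this listing covers, the example below consumes JSON events from Kafka with Spark Structured Streaming and appends them to an S3 path. The brokers, topic, schema, and paths are assumptions for illustration, and the Kafka connector package must be available on the cluster.

```python
# Hedged sketch: Spark Structured Streaming from a Kafka topic to an S3 landing zone.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

# Hypothetical event schema for the JSON payloads.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # assumed brokers
         .option("subscribe", "orders")
         .option("startingOffsets", "latest")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-bucket/streams/orders/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```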
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Job Description: Postgres Developer
The Postgres developer will be responsible for development on the Postgres platform hosted in Azure. Good data engineering, data modeling, and SQL knowledge is a must, along with a Postgres programming background. The developer will be responsible for providing design and development solutions for applications on Postgres (EDB).

Essential Job Functions:
- Understand requirements and engage with the team to design and deliver projects.
- Design and implement Postgres EDB projects in CMS.
- Design and develop the application lifecycle utilizing EDB Postgres / Azure technologies.
- Participate in design and planning and necessary documentation.
- Participate in Agile ceremonies including daily standups, scrum, retrospectives, demos, and code reviews.
- Hands-on with PSQL/SQL development and Unix scripting.
- Engage with the team to develop and deliver cross-functional products.

Key Skills:
- Data engineering, SQL, and ETL
- Unix scripting
- Postgres DBMS
- Data transfer methodologies
- CI/CD
- Strong communication

Other Responsibilities:
- Document and maintain project artifacts.
- Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
- Complete training as required for Privacy, Code of Conduct, etc.
- Promptly report any known or suspected loss, theft, or unauthorized disclosure or use of PI to the General Counsel/Chief Compliance Officer or Chief Information Officer.
- Adhere to the company's compliance program.
- Safeguard the company's intellectual property, information, and assets.
- Other duties as assigned.

Minimum Qualifications and Job Requirements:
- Bachelor's degree in CS.
- 7 years of hands-on experience in designing and developing DB solutions.
- 5 years of hands-on experience in Oracle or Postgres DBMS.
- 5 years of hands-on experience in Unix scripting, SQL, object-oriented programming, ETL, and unit testing.
- Experience with Azure DevOps and CI/CD, as well as agile tools and processes including JIRA and Confluence.
Posted 3 weeks ago
5.0 - 7.0 years
15 - 25 Lacs
Pune, Bengaluru
Work from Office
Job Role & Responsibilities:
- Responsible for architecting, designing, building, and deploying data systems, pipelines, etc.
- Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
- Responsible for design, implementation, development, and migration.
- Migrate data from traditional database systems to the cloud environment.
- Architect and implement ETL and data movement solutions.

Technical Skill, Qualification & Experience Required:
- 4.5-7 years of experience in data engineering: Azure cloud data engineering, Azure Databricks, Data Factory, PySpark, SQL, Python.
- Hands-on experience in Azure Databricks, Data Factory, PySpark, and SQL.
- Proficient in cloud services: Azure.
- Strong hands-on experience working with streaming datasets.
- Hands-on expertise in data refinement using PySpark and Spark SQL.
- Familiarity with building datasets using Scala.
- Familiarity with tools such as Jira and GitHub.
- Experience leading agile scrum, sprint planning, and review sessions.
- Good communication and interpersonal skills.
- Comfortable working in a multidisciplinary team within a fast-paced environment.
* Only immediate joiners will be preferred.
Posted 3 weeks ago
8.0 - 10.0 years
25 - 35 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (Java + Hadoop/Spark)
Location: Bangalore
Type: Full Time
Experience: 8-12 years
Notice Period: Immediate joiners to 30 days

Job Description:
We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.

Key Responsibilities:
- Develop and maintain data pipelines using Java.
- Work with big data technologies such as Hadoop or Spark to process large datasets.
- Optimize data workflows and ensure high performance and reliability.
- Collaborate with data scientists, analysts, and other engineers on data-related initiatives.

Requirements:
- Strong programming skills in Java.
- Hands-on experience with Hadoop or Spark.
- Experience with data ingestion, transformation, and storage solutions.
- Familiarity with distributed systems and big data architecture.
Posted 3 weeks ago
7.0 - 12.0 years
7 - 17 Lacs
Bengaluru
Work from Office
About this role:
Wells Fargo is seeking a Senior Lead Business Execution Consultant.

In this role, you will:
- Act as a Business Execution advisor to leadership to drive performance and initiatives, and develop and implement information delivery or presentations to key stakeholders and senior management.
- Lead the strategy and resolution of highly complex and unique challenges related to Business Execution that require solid analytical skills, extensive knowledge of Business Execution, and understanding of the business, delivering longer-term and large-scale solutions.
- Provide vision, direction, and expertise to senior leadership for implementing innovative and significant business solutions that are large scale and cross-organizational.
- Lead team meetings or steering committees to facilitate decision making and support implementation of recommendations and plans.
- Strategically engage with all levels of professionals and managers across multiple lines of business and serve as an experienced advisor to the leadership.
- Provide direction to a cross-functional team using business expertise.

Required Qualifications:
- 7+ years of Business Execution, Implementation, or Strategic Planning experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Business Administration, or a related field.
- Proven experience in innovation roles within the banking or financial services industry, preferably within a global capability center (GCC) or captive unit.
- Strong understanding of banking processes and the financial services industry.
- Demonstrated expertise in emerging technologies, including generative AI, agentic AI, data engineering, data mining and visualization, machine learning, NLP, and intelligent automation.
- Exceptional research and analytical skills, with the ability to translate complex data into actionable insights.
- Excellent project management skills, with a track record of successfully leading innovation projects.
- Outstanding communication and presentation skills, with experience in content creation and delivering innovation roadshows.
- Ability to work collaboratively in a cross-functional team environment and manage relationships with diverse stakeholders.

Job Expectations:
We are seeking a dynamic and forward-thinking Innovation and Incubation specialist with hands-on experience to join our team. In this role, you will be instrumental in identifying and researching emerging technologies, developing innovative solutions, and leading industry research and market studies to identify platforms that help the digital transformation journey within the bank. Your expertise in generative AI, intelligent automation, and market analysis will be crucial in shaping the future of our banking operations.

- Emerging Technology Research: Stay abreast of the latest technological trends, with a focus on generative AI, agentic AI, fintech, data engineering, data mining and visualization, machine learning, NLP/NLG, and intelligent automation, to identify opportunities for innovation within banking processes.
- Solution and Platform Identification: Evaluate and recommend new technologies, platforms, and solutions that align with the bank's strategic objectives and have the potential to enhance operational efficiency and customer experience.
- Market and Industry Analysis: Conduct comprehensive market research and industry analysis to understand competitive landscapes, identify market opportunities, and inform strategic decision-making.
- Process Understanding and Improvement: Analyze existing banking processes to identify areas for improvement and develop innovative solutions that streamline operations and enhance service delivery.
- Innovation Roadshows and Content Creation: Develop and deliver engaging presentations, newsletters, and other content to communicate innovation initiatives and foster a culture of innovation within the organization.
- Proof of Concept Development: Lead the development and execution of proof-of-concept projects to validate the feasibility and value of new technologies and solutions.
- Project Management: Oversee innovation projects from inception to completion, ensuring timely delivery, effective communication, and alignment with strategic goals.
Posted 3 weeks ago
9.0 - 14.0 years
35 - 65 Lacs
Bengaluru
Hybrid
Software Architect - Product MNC - Urgent Hiring!
Are you passionate about crafting scalable, cloud-native architectures? Do you thrive at the intersection of AI, cloud, and data platforms? Join us at Integra Connect, where we are transforming healthcare technology through intelligent solutions and innovation.

Location: Bangalore - Hybrid

What You'll Do:
- Architect end-to-end systems using Azure, Snowflake, Python, and .NET
- Lead AI/ML architecture and drive responsible AI practices
- Design scalable data platforms and ETL pipelines
- Mentor cross-functional engineering teams
- Set best practices for performance, security, and cloud optimization

What We're Looking For:
- 8+ years in software development, 3+ years in architecture
- Deep expertise in Azure DevOps, AKS, Snowflake, AI/ML, and .NET/C#
- Experience in cloud-native architectures; experience with healthcare systems is a plus
- Strong leadership, strategic thinking, and problem-solving skills

At Integra Connect, you'll be part of a team enabling specialty healthcare practices to thrive in a value-based care world, leveraging modern tech to make real impact.

Hybrid | Competitive Benefits | Growth-Focused Culture

Ready to architect the future of healthcare? Apply now or DM me for more info!
Posted 3 weeks ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
As a Senior Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with data analysts to develop ETL processes for analysis and reporting
- Working with product managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in your respective time zone (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 8 years of software engineering experience.
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired.
- 2+ years of experience/fluency in Python.
- Proficient with relational databases and advanced SQL.
- Expert in the use of services like Spark and Hive; experience working with container-based solutions is a plus.
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle from inception to production and monitoring.
- Experience in the advertising attribution domain is a plus.
- Experience in agile software development processes.
- Excellent interpersonal and communication skills.
Posted 3 weeks ago
7.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You'll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world. A Day in the Life Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation; join our new Minimed India Hub as a Senior Digital Engineer. Responsibilities may include the following and other duties may be assigned: Expertise in translating conceptual needs and business requirements into finalized architectural design. Able to manage large projects or processes that span across other collaborative teams both within and beyond Digital Technology. Operates autonomously to define, describe, diagram, and document the role and interaction of the high-level technological and human components that combine to provide cost-effective and innovative solutions to meet evolving business needs. Promotes, guides, and governs good architectural practice through the application of well-defined, proven technology and human interaction patterns and through architecture mentorship. Responsible for designing, developing, and maintaining scalable data pipelines, preferably using PySpark. Work with structured and unstructured data from various sources. Optimize and tune PySpark applications for performance and scalability. Deep experience supporting the full lifecycle management of the entire IT portfolio, including the selection, appropriate usage, enhancement, and replacement of information technology applications, infrastructure, and services. Implement data quality checks and ensure data integrity. Monitor and troubleshoot data pipeline issues and ensure timely resolution. Document technical specifications and maintain comprehensive documentation for data pipelines. The ideal candidate is exposed to the fast-paced world of Big Data technology and has experience in building ETL/ELT data solutions using new and emerging technologies while maintaining stability of the platform. Required Knowledge and Experience: Strong programming knowledge in Java, Scala, or Python/PySpark, and SQL. 4-8 years of experience in data engineering, with a focus on PySpark. Proficiency in Python and Spark, with strong coding and debugging skills. Experience in designing and building enterprise data solutions on AWS, Azure, or Google Cloud Platform (GCP). Experience with big data technologies such as Hadoop, Hive, and Kafka. Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server). Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery. Familiarity with data lake architectures and data storage solutions. Knowledge of CI/CD pipelines and version control systems (e.g., Git). Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication and collaboration skills, with the ability to work effectively in a team environment. 
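To illustrate the kind of PySpark pipeline with data quality checks this posting describes, here is a minimal sketch. The source/target paths, column names, and the 95% threshold are hypothetical and used only for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("patient_readings_etl").getOrCreate()

# Assumed raw source location (illustrative path).
raw = spark.read.parquet("s3://example-bucket/raw/readings/")

cleaned = (
    raw.dropDuplicates(["reading_id"])                    # basic integrity rule
       .filter(F.col("glucose_mg_dl").between(10, 1000))  # plausible-range check
       .withColumn("ingested_at", F.current_timestamp())
)

# Simple data quality gate: fail the run if too many rows were rejected.
total, kept = raw.count(), cleaned.count()
if total and kept / total < 0.95:
    raise ValueError(f"Data quality check failed: kept only {kept}/{total} rows")

cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/readings/")
```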
Physical Job Requirements The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position. Regards, Ashwini Ukekar Sourcing Specialist
Posted 3 weeks ago
4.0 - 7.0 years
6 - 12 Lacs
Pune
Hybrid
A Day in the Life We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation; join our new Minimed India Hub as a Digital Engineer. We are working to improve how healthcare addresses the needs of more people, in more ways and in more places around the world. As a PySpark Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines using PySpark. You will work closely with data scientists, analysts, and other stakeholders to ensure the efficient processing and analysis of large datasets, while handling complex transformations and aggregations. Responsibilities may include the following and other duties may be assigned: Design, develop, and maintain scalable and efficient ETL pipelines using PySpark. Work with structured and unstructured data from various sources. Optimize and tune PySpark applications for performance and scalability. Collaborate with data scientists and analysts to understand data requirements, review Business Requirement documents, and deliver high-quality datasets. Implement data quality checks and ensure data integrity. Monitor and troubleshoot data pipeline issues and ensure timely resolution. Document technical specifications and maintain comprehensive documentation for data pipelines. Stay up to date with the latest trends and technologies in big data and distributed computing. Required Knowledge and Experience: Bachelor's degree in Computer Science, Engineering, or a related field. 4-5 years of experience in data engineering, with a focus on PySpark. Proficiency in Python and Spark, with strong coding and debugging skills. Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server). Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery. Familiarity with data lake architectures and data storage solutions. Experience with big data technologies such as Hadoop, Hive, and Kafka. Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication and collaboration skills, with the ability to work effectively in a team environment. Preferred Skills: Experience with Databricks. Experience with orchestration tools like Apache Airflow or AWS Step Functions. Knowledge of machine learning workflows and experience working with data scientists. Understanding of data security and governance best practices. Familiarity with streaming data platforms and real-time data processing. Knowledge of CI/CD pipelines and version control systems (e.g., Git). Physical Job Requirements The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position. If interested, please share your updated CV at ashwini.ukekar@medtronic.com Regards, Ashwini Ukekar Sourcing Specialist Medtronic
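As a small, hedged sketch of the "complex transformations and aggregations" this role mentions, the snippet below shows a typical PySpark grouped aggregation plus a windowed rolling average. The table and column names (raw.device_events, curated.device_daily_usage, event_ts, battery_pct) are assumptions for illustration, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("device_usage_aggregates").getOrCreate()

events = spark.table("raw.device_events")        # assumed source table

# Daily per-device aggregates.
daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .groupBy("device_id", "event_date")
          .agg(F.count("*").alias("event_count"),
               F.avg("battery_pct").alias("avg_battery_pct"))
)

# Rolling 7-day average per device, a typical windowed aggregation.
w = Window.partitionBy("device_id").orderBy("event_date").rowsBetween(-6, 0)
daily = daily.withColumn("rolling_avg_events", F.avg("event_count").over(w))

daily.write.mode("overwrite").saveAsTable("curated.device_daily_usage")  # assumed target
```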
Posted 3 weeks ago
2.0 - 4.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Fusion Plus Solutions Inc is looking for Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 3 weeks ago