2.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
Tiger Analytics is a global AI and analytics consulting firm with a team of over 2800 professionals focused on using data and technology to solve complex problems that impact millions of lives worldwide. Our culture is centered around expertise, respect, and a team-first mindset. Headquartered in Silicon Valley, we have delivery centers globally and offices in various cities across India, the US, UK, Canada, and Singapore, along with a significant remote workforce. At Tiger Analytics, we are certified as a Great Place to Work.

Joining our team means being at the forefront of the AI revolution, working with innovative teams that push boundaries and create inspiring solutions. We are currently looking for an Azure Big Data Engineer to join our team in Chennai, Hyderabad, or Bangalore. As a Big Data Engineer (Azure), you will be responsible for building and implementing various analytics solutions and platforms on Microsoft Azure using a range of Open Source, Big Data, and Cloud technologies. Your typical day might involve designing and building scalable data ingestion pipelines, processing structured and unstructured data, orchestrating pipelines, collaborating with teams and stakeholders, and making critical tech-related decisions.

To be successful in this role, we expect you to have 4 to 9 years of total IT experience with at least 2 years in big data engineering and Microsoft Azure. You should be proficient in technologies such as Azure Data Factory (ADF), PySpark, Databricks, ADLS, Azure SQL Database, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Strong coding skills in SQL, Python, or Scala/Java are essential, as well as experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4J, and Elastic Search. Knowledge of file formats such as Delta Lake, Avro, Parquet, JSON, and CSV is also required.
Ideally, you should have experience in building REST APIs, working on Data Lake or Lakehouse projects, supporting BI and Data Science teams, and following Agile and DevOps processes. Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Developer (DE) would be a valuable addition to your profile.

At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with different skills and qualities to apply, even if they do not meet all the criteria for the role. We are committed to providing equal opportunities and fostering a culture of listening, trust, respect, and growth. Please note that the job designation and compensation will be based on your expertise and experience, and our compensation packages are competitive within the industry. If you are passionate about leveraging data and technology to drive impactful solutions, we would love to stay connected with you.
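The ingestion work described above — landing structured and semi-structured records, rejecting malformed ones, and writing to date-partitioned lake paths — can be sketched framework-free. This is an illustrative example only, not Tiger Analytics code: the field names, rules, and path layout are hypothetical stand-ins for checks a real ADF/PySpark pipeline would express before writing Delta or Parquet partitions.

```python
# Minimal sketch of a validate-and-partition step in an ingestion pipeline.
# Field names and rules are hypothetical; a real pipeline would express the
# same checks in PySpark before writing Delta/Parquet partitions.
from datetime import date

REQUIRED = {"id", "event_date", "amount"}

def validate(record: dict) -> bool:
    """A record passes if all required fields are present and typed sanely."""
    if not REQUIRED <= record.keys():
        return False
    return isinstance(record["amount"], (int, float)) and record["amount"] >= 0

def partition_path(record: dict, base: str = "raw/events") -> str:
    """Derive a date-partitioned output path, as a lake layout typically does."""
    d = date.fromisoformat(record["event_date"])
    return f"{base}/year={d.year}/month={d.month:02d}/day={d.day:02d}"

def split_batch(records):
    """Route records into good/bad outputs, mimicking ETL exception handling."""
    good, bad = [], []
    for r in records:
        (good if validate(r) else bad).append(r)
    return good, bad

batch = [
    {"id": 1, "event_date": "2024-03-05", "amount": 12.5},
    {"id": 2, "event_date": "2024-03-05"},               # missing amount -> bad
    {"id": 3, "event_date": "2024-03-06", "amount": -4}, # negative -> bad
]
good, bad = split_batch(batch)
```

Here `split_batch` keeps the first record and quarantines the other two; `partition_path(good[0])` yields `raw/events/year=2024/month=03/day=05`, the kind of partition key ADLS-backed tables are organized by.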
Posted 2 days ago
1.0 - 5.0 years
0 Lacs
hyderabad, telangana
On-site
As an integral part of American Airlines Tech Hub in Hyderabad, India, you will have the opportunity to contribute to the innovative and tech-driven environment that shapes the future of travel. Your role will involve collaborating with source data application teams and product owners to develop and support analytics solutions that provide valuable insights for informed decision-making. By leveraging Azure products and services such as Azure Data Lake Storage, Azure Data Factory, and Azure Databricks, you will be responsible for implementing data migration and engineering solutions to enhance the airline's digital capabilities.

Your responsibilities will encompass various aspects of the development lifecycle, including design, cloud engineering, data modeling, testing, performance tuning, and deployment. Working within a DevOps team, you will have the chance to take ownership of your product and contribute to the development of batch and streaming data pipelines using cloud technologies. Adherence to coding standards, best practices, and security guidelines will be crucial as you collaborate with a multidisciplinary team to deliver technical solutions effectively.

To excel in this role, you should have a Bachelor's degree in a relevant technical discipline or equivalent experience, along with a minimum of 1 year of software solution development experience using agile methodologies. Proficiency in SQL for data analytics and prior experience with cloud development, particularly in Microsoft Azure, will be advantageous. Preferred qualifications include additional years of software development and data analytics experience, as well as familiarity with tools such as Azure EventHub, Azure Power BI, and Teradata Vantage. Your success in this position will be further enhanced by expertise in the Azure Technology stack, practical knowledge of Azure cloud services, and relevant certifications such as Azure Development Track and Spark Certification.
A combination of development, administration, and support experience in various tools and platforms, including scripting languages, data platforms, and BI analytics tools, will be beneficial for your role in driving data management and governance initiatives within the organization. Effective communication skills, both verbal and written, will be essential for engaging with stakeholders across different levels of the organization. Additionally, your physical abilities should enable you to perform the essential functions of the role safely and successfully, with or without reasonable accommodations as required by law.

At American Airlines, diversity and inclusion are integral to our workforce, fostering an inclusive environment where employees can thrive and contribute to the airline's success. Join us at American Airlines and embark on a journey where your technical expertise and innovative spirit will play a pivotal role in shaping the future of travel. Feel free to be yourself as you contribute to the seamless operation of the world's largest airline, caring for people on life's journey.
Posted 2 days ago
7.0 - 11.0 years
0 Lacs
haryana
On-site
Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 professionals spanning more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Principal Consultant - Databricks Lead Developer. As a Databricks Developer in this role, you will be tasked with solving cutting-edge real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Keep abreast of new and emerging technologies and assess their potential application for service offerings and products.
- Collaborate with architects and lead engineers to devise solutions that meet functional and non-functional requirements.
- Demonstrate proficiency in understanding relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess experience in the Data Engineering domain.

Qualifications we are looking for:

Minimum qualifications:
- Bachelor's Degree or equivalency in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- <<>> years of experience in IT.
- Familiarity with new and emerging technologies and their possible applications for service offerings and products.
- Collaboration with architects and lead engineers to develop solutions meeting functional and non-functional requirements.
- Understanding of industry trends and standards.
- Strong analytical and technical problem-solving abilities.
- Proficiency in either Python or Scala, preferably Python.
- Experience in the Data Engineering domain.

Preferred qualifications:
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD for building Databricks job pipelines.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

If you are a proactive individual with a passion for innovation and a strong commitment to continuous learning and upskilling, we invite you to apply for this exciting opportunity to join our team at Genpact.
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
haryana
On-site
You will be responsible for developing applications using various Microsoft and web development technologies such as ASP.Net, C#, MVC, Web Forms, Angular, SQL Server, T-SQL, and Microservices. Your expertise in big data technologies like Hadoop, Spark, Hive, Python, Databricks, etc. will be crucial for this role.

With a Bachelor's degree in Computer Science or equivalent experience through higher education, you should have at least 8 years of experience in Data Engineering and/or Software Engineering. Your strong coding skills, along with knowledge of infrastructure as code and automating production data and ML pipelines, will be highly valued. You should be proficient in on-prem to cloud migration, particularly to Azure, and have hands-on experience with Azure PaaS offerings such as Synapse, ADLS, Databricks, Event Hubs, Cosmos DB, Azure ML, etc. Experience in building, governing, and scaling data warehouses/lakes/lakehouses is essential for this role.

Your expertise in developing and tuning stored procedures and T-SQL scripting in SQL Server, along with familiarity with various .Net development tools and products, will contribute significantly to the success of the projects. You should be adept with the agile software development lifecycle and DevOps principles to ensure efficient project delivery.
Posted 4 days ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Developer contracted by Luxoft to support customer initiatives, your main task will be developing solutions based on client requirements within the Telecom/network environment. You will use technologies such as Databricks, Azure, Apache Spark, Python, SQL, and Apache Airflow to create and manage Databricks clusters for ETL processes. Integration with ADLS and Blob Storage, and efficient data ingestion from various sources including on-premises databases, cloud storage, APIs, and streaming data, will also be part of your role.

Moreover, you will handle secrets using Azure Key Vault, interact with APIs, and gain hands-on experience with Kafka/Azure EventHub streaming. Your expertise in Databricks Delta APIs, Unity Catalog, and version control tools like GitHub will be crucial. Additionally, you will be involved in data analytics, supporting ML frameworks, and integrating with Databricks for model training. Proficiency in Python, Apache Airflow, Microsoft Azure, Databricks, SQL, ADLS, Blob Storage, Kafka/Azure EventHub, and various other related skills is a must.

The ideal candidate should hold a Bachelor's degree in Computer Science or a related field and possess at least 7 years of development experience. Problem-solving skills, effective communication abilities, teamwork, and a commitment to continuous learning are essential traits for this role. Desirable skills include exposure to Snowflake, PostgreSQL, Redis, GenAI, and a good understanding of RBAC. Proficiency in English at C2 level is required for this senior-level position based in Bengaluru, India. This opportunity falls under Big Data Development within Cross Industry Solutions and is expected to be effective from 06/05/2025.
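The Airflow orchestration mentioned above boils down to declaring task dependencies and letting the scheduler run them in a valid order. A toy, framework-free illustration of that ordering guarantee is below; the task names are hypothetical, and in a real deployment each would be an Airflow operator (for instance, one triggering a Databricks job).

```python
# Toy illustration of DAG-style ordering (what Airflow guarantees for task
# dependencies). Task names are hypothetical; real pipelines would declare
# these as Airflow operators, e.g. a Databricks job trigger per task.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first
dag = {
    "ingest_api":    set(),
    "ingest_onprem": set(),
    "transform":     {"ingest_api", "ingest_onprem"},
    "load_delta":    {"transform"},
    "publish_bi":    {"load_delta"},
}

# static_order() yields a valid execution order: every task appears only
# after all of its upstream dependencies.
order = list(TopologicalSorter(dag).static_order())
```

Both ingestion tasks come out before `transform`, which precedes `load_delta` and `publish_bi` — the same contract Airflow enforces when it schedules a DAG run.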
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Diageo's ambition is to be one of the best performing, most trusted, and respected consumer products companies in the world. The strategy is to support premiumisation in developed and emerging countries by offering a broad portfolio across different consumer occasions and price points. This approach also plays a crucial role in shaping responsible drinking trends in markets where international premium spirits are an emerging category.

As a member of Diageo's Analytics & Insights team, you will be instrumental in designing, developing, and implementing analytics products to drive the company's competitive advantage and facilitate data-driven decisions. Your role will involve advancing the sophistication of analytics throughout Diageo, serving as a data evangelist to empower stakeholders, identifying meaningful insights from vast data sources, and communicating findings to drive growth, enhance consumer experiences, and optimize business processes. While the role does not entail budget ownership, understanding architecture resource costs is necessary. You will be supporting global initiatives and functions across various markets, working closely with key stakeholders to create possibilities, foster conditions for success, promote personal and professional growth, and maintain authenticity in all interactions.

The purpose of the role includes owning and developing a domain-specific data visualization product portfolio, ensuring compliance with technological and business priorities, and contributing to the end-to-end build of analytics products meeting enterprise standards. You will lead agile teams in developing robust BI solutions, provide technical guidance, oversee data flow, and collaborate with internal and external partners to deliver innovative solutions. Your top accountabilities will involve technical leadership in analytics product builds, optimization of data visualization architecture, BAU support, and feedback to enhance data model standards.
Business acumen is essential, particularly in working with marketing data and building relationships with stakeholders to drive data-led innovation.

Required qualifications include multiple years of experience in BI solution development, a bachelor's degree in a relevant field, hands-on experience as a lead developer, proficiency in DAX & M language, knowledge of Azure architecture, and expertise in data acquisition and processing. Additionally, experience with the Azure platform, technical documentation, DevOps solutions, and Agile methodologies, and a willingness to deepen solution architecture skills are vital. Experience with structured and unstructured datasets, design collaboration, user experience best practices, and visualization trends is advantageous. A dynamic personality, proficiency in English, and excellent communication skills are key to success in this role.
Posted 5 days ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.

In this role, you should have developed or worked on at least one Gen AI project and have experience in data pipeline implementation with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, a good understanding of cloud compute services, load balancing, identity management, authentication, and authorization in the cloud is essential.

Your profile should include good knowledge of infrastructure capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. You should be able to contribute to making architectural choices using various cloud services and solution methodologies. Proficiency in programming using Python is required, along with expertise in cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Understanding networking, security, design principles, and best practices in the cloud is also important.

At Capgemini, we value flexible work arrangements to provide support for maintaining a healthy work-life balance. You will have opportunities for career growth through various career growth programs and diverse professions tailored to support you in exploring a world of opportunities. Additionally, you can equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner with a rich heritage of over 55 years.
We have a diverse team of 340,000 members in more than 50 countries, working together to accelerate the dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. Trusted by clients to unlock the value of technology, we deliver end-to-end services and solutions leveraging strengths from strategy and design to engineering, fueled by market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a partner ecosystem. Our global revenues in 2023 were reported at €22.5 billion.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Engineer, IT Data at American Airlines, you will be part of a diverse, high-performing team dedicated to technical excellence. Your primary focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will work in encompasses managing and leveraging data as a strategic asset, including data management, storage, integration, and governance. This domain also involves Machine Learning, AI, Data Science, and Business Intelligence.

In this role, you will collaborate closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, among others, as well as traditional data warehouse tools.

Your tasks will span multiple aspects of the development lifecycle, including design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. Furthermore, you will provide technical leadership within a team environment and work independently.

As part of a DevOps team, you will completely own and support the product, implementing batch and streaming data pipelines using cloud technologies. Your responsibilities will also include leading the development of coding standards, best practices, and privacy and security guidelines, as well as mentoring others on technical and domain skills to create multi-functional teams.
For success in this role, you will need a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. You should have at least 3 years of software solution development experience using agile and DevOps, operating in a product model, as well as 3 years of data analytics experience using SQL. Additionally, a minimum of 3 years of cloud development and data lake experience, preferably in Microsoft Azure, is required.

Preferred qualifications include 5+ years of software solution development experience using agile, DevOps, and a product model, and 5+ years of data analytics experience using SQL. Experience in full-stack development, preferably in Azure, and familiarity with Teradata Vantage development and administration are also preferred. Airline industry experience is a plus.

In terms of skills, licenses, and certifications, you should have expertise with the Azure technology stack for data management, data ingestion, capture, processing, curation, and creating consumption layers. An Azure Development Track certification and Spark certification are preferred. Proficiency in several tools/platforms such as Python, Spark, Unix, SQL, Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, Snowflake, and more is required. Additionally, experience with Azure cloud technologies, CI/CD tools, the BI analytics tool stack, and data governance and privacy tools is beneficial for this role.
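The streaming pipelines this posting refers to (Event Hub feeding Stream Analytics or Spark Structured Streaming) are, at their core, windowed aggregations over timestamped events. Below is a framework-free sketch of a tumbling-window count, with hypothetical illustration data, to show the operation those services perform at scale.

```python
# Framework-free sketch of a tumbling-window aggregation, the core operation
# behind Event Hub + Stream Analytics / Spark Structured Streaming jobs.
# Timestamps and keys are hypothetical illustration data.
from collections import defaultdict

def tumbling_counts(events, window_secs=60):
    """Count events per (window_start, key). events: (epoch_seconds, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # bucket into fixed windows
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "a"), (30, "a"), (61, "a"), (65, "b")]
agg = tumbling_counts(events)
# agg -> {(0, 'a'): 2, (60, 'a'): 1, (60, 'b'): 1}
```

A production job adds watermarking and late-event handling on top of this bucketing, but the window arithmetic is the same.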
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Engineer, IT Data at American Airlines, you will be part of a diverse and high-performing team dedicated to technical excellence. Your main focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will be working in refers to the area within Information Technology that focuses on managing and leveraging data as a strategic asset. This includes data management, storage, integration, and governance, leaning into Machine Learning, AI, Data Science, and Business Intelligence.

In this role, you will work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc., as well as traditional data warehouse tools.

Your responsibilities will involve multiple aspects of the development lifecycle including design, cloud engineering, ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. You will provide technical leadership, collaborate within a team environment, and work independently. Additionally, you will be part of a DevOps team that completely owns and supports the product, implementing batch and streaming data pipelines using cloud technologies. As an essential member of the team, you will lead the development of coding standards, best practices, privacy, and security guidelines. You will also mentor others on technical and domain skills to create multi-functional teams.
Your success in this role will require a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. To excel in this position, you should have at least 3 years of software solution development experience using agile and DevOps, operating in a product model. Moreover, you should have 3+ years of data analytics experience using SQL, and cloud development and data lake experience, preferably with Microsoft Azure. Preferred qualifications include 5+ years of software solution development experience, 5+ years of data analytics experience using SQL, 3+ years of full-stack development experience, and familiarity with Azure technologies.

Additionally, skills, licenses, and certifications required for success in this role include expertise with the Azure technology stack, practical direction within Azure native cloud services, Azure Development Track certification, Spark certification, and a combination of development, administration, and support experience with various tools/platforms such as scripting (Python, Spark, Unix, SQL), data platforms (Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, Snowflake), Azure cloud technologies, CI/CD tools (GitHub, Jenkins, Azure DevOps, Terraform), the BI analytics tool stack (Cognos, Tableau, Power BI, Alteryx, Denodo, Grafana), and data governance and privacy tools (Alation, Monte Carlo, Informatica, BigID).

Join us at American Airlines, where you can explore a world of possibilities, travel the world, grow your expertise, and become the best version of yourself while contributing to the transformation of technology delivery for our customers and team members worldwide.
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad/Secunderabad
Hybrid
Job Objective

We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes. If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you!

Required Qualifications

Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience
Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD tech stack, MSBI, etc.)

Preferred Qualifications & Skills

Must-Have Skills:
- Expertise in relational & multi-dimensional database architectures
- Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint
- Strong experience in Power BI, MDX, SSAS, SSIS, SSRS, Tabular models & DAX queries
- Deep understanding of SQL Server Tabular Model & multidimensional database design
- Excellent SQL-based data analysis skills
- Strong hands-on experience with Azure Data Factory, Databricks, and PySpark/Python

Nice-to-Have Skills:
- Exposure to AWS or GCP
- Experience with Lakehouse Architecture, real-time streaming (Kafka/Event Hubs), and Infrastructure as Code (Terraform/ARM)
- Familiarity with Cognos, Qlik, Tableau, MDM, DQ, and data migration
- MSBI, Power BI, or Azure certifications
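The Delta Lake work listed above centers on incremental loads with MERGE semantics: update rows that already exist, insert rows that don't. Here is an illustrative, framework-free upsert with hypothetical keys and columns; in Databricks the same step would be `DeltaTable.merge` with `whenMatchedUpdate` / `whenNotMatchedInsert` clauses, or a SQL `MERGE INTO`.

```python
# Illustrative upsert mirroring the MERGE semantics used when loading a
# Delta Lake table from an incremental batch. Keys/columns are hypothetical.
def merge_upsert(target: dict, batch: list) -> dict:
    """target: {key: row}; batch: list of rows carrying a 'key' field.
    Returns a new merged table, leaving the original target untouched."""
    merged = dict(target)
    for row in batch:
        merged[row["key"]] = row  # update if matched, insert if not
    return merged

target = {1: {"key": 1, "qty": 5}, 2: {"key": 2, "qty": 7}}
batch = [{"key": 2, "qty": 9},   # matched  -> update
         {"key": 3, "qty": 1}]   # unmatched -> insert
merged = merge_upsert(target, batch)
# merged has three rows: key 2 updated to qty 9, key 3 newly inserted
```

Returning a new dict rather than mutating `target` loosely mirrors how Delta's transaction log gives each MERGE a new table version instead of overwriting data in place.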
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
The Data Quality Monitoring Lead plays a crucial role in ensuring the accuracy, reliability, and integrity of data across various systems and platforms. You will lead an offshore team, establish robust data quality monitoring frameworks, and collaborate with cross-functional stakeholders to address data-related challenges effectively.

Your responsibilities will include overseeing real-time monitoring of data pipelines, dashboards, and logs using tools like Log Analytics, KQL queries, and Azure Monitoring to detect anomalies promptly. You will configure alerting mechanisms for timely notifications of potential data discrepancies and collaborate with support teams to investigate and resolve system-related issues impacting data quality.

Additionally, you will lead the team in identifying and categorizing data quality issues, perform root cause analysis to determine underlying causes, and collaborate with system support teams and data stewards to implement corrective measures. Developing strategies for rectifying data quality issues, designing monitoring tools, and conducting cross-system data analysis will also be part of your role.

Moreover, you will evaluate existing data monitoring processes, refine monitoring tools, and promote best practices in data quality monitoring to ensure standardization across all data-related activities. You will also lead and mentor an offshore team, develop a centralized knowledge base, and serve as the primary liaison between the offshore team and the Lockton Data Quality Lead.

In terms of technical skills, proficiency in data monitoring tools like Log Analytics, KQL, Azure Monitoring, and Power BI, a strong command of SQL, experience in automation scripting using Python, familiarity with Azure services, and an understanding of data flows involving MuleSoft and Salesforce platforms are required. Additionally, experience with Azure DevOps for issue tracking and version control is preferred.
This role requires a proactive, detail-oriented individual with strong leadership and communication skills, along with a solid technical background in data monitoring, analytics, database querying, automation scripting, and Azure services.
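The alerting described in this posting reduces to threshold rules evaluated over monitored data. A minimal Python sketch of that check-and-alert logic follows; the column names and thresholds are hypothetical, and in the stack above the equivalent rules would typically live in KQL queries over Log Analytics wired to Azure Monitor alerts.

```python
# Minimal sketch of threshold-based data-quality checks behind alerting.
# Thresholds and column names are hypothetical illustration values.
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is None) / len(rows)

def run_checks(rows, rules):
    """rules: {column: max_allowed_null_rate}. Returns alert messages."""
    alerts = []
    for column, threshold in rules.items():
        rate = null_rate(rows, column)
        if rate > threshold:
            alerts.append(f"{column}: null rate {rate:.0%} exceeds {threshold:.0%}")
    return alerts

rows = [{"email": "a@x.com", "phone": None},
        {"email": None,      "phone": None}]
alerts = run_checks(rows, {"email": 0.25, "phone": 0.50})
# both columns breach their thresholds here, producing two alerts
```

In practice the alert messages would be routed to a notification channel (email, Teams, a Power BI incident dashboard) rather than returned as a list, but the rule evaluation is the same.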
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
hyderabad, telangana
On-site
As an American Airlines team member in the Tech Hub in Hyderabad, India, you will have the opportunity to be part of a diverse, high-performing team dedicated to technical excellence. Your primary focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you'll be working in is centered around managing and leveraging data as a strategic asset, including data management, storage, integration, and governance, with a strong emphasis on Machine Learning, AI, Data Science, and Business Intelligence.

In this role, you will work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide valuable insights for better decision-making. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, and more, as well as traditional data warehouse tools.

Your responsibilities will include various aspects of the development lifecycle, such as design, cloud engineering, data modeling, testing, performance tuning, deployments, BI, alerting, and production support. You will collaborate within a team environment and independently to develop technical solutions. As part of a DevOps team, you will have ownership and support for the product you work on, implementing both batch and streaming data pipelines using cloud technologies.

To be successful in this role, you should have a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems, or a related technical discipline, or equivalent experience. You should have at least 1+ years of software solution development experience using agile and DevOps, and data analytics experience using SQL. Experience with cloud development and data lake technologies, particularly in Microsoft Azure, is preferred.
Preferred qualifications include additional years of experience in software solution development, data analytics, full-stack development, and specific experience with Azure technologies. Skills in scripting languages like Python, Spark, Unix, and SQL, as well as expertise with the Azure technology stack and various data platforms and BI analytics tools, are highly valued. Certifications such as Azure Development Track and Spark are preferred.

Effective communication skills are essential for this role, as you will need to collaborate with team members at all levels within the organization. Physical abilities are also necessary to perform the essential functions of the position safely. American Airlines values inclusion and diversity, providing a supportive environment for all team members to reach their full potential.

If you are ready to be part of a dynamic, tech-driven environment where your creativity and strengths are celebrated, join American Airlines in Hyderabad and immerse yourself in the exciting world of technological innovation. Feel free to be yourself and contribute to keeping the largest airline in the world running smoothly as we care for people on life's journey.
Posted 1 week ago
2.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
Tiger Analytics is a global AI and analytics consulting firm that is at the forefront of solving complex problems using data and technology. With a team of over 2800 experts spread across the globe, we are dedicated to making a positive impact on the lives of millions worldwide. Our culture is built on expertise, respect, and collaboration, with a focus on teamwork. While our headquarters are in Silicon Valley, we have delivery centers and offices in various cities in India, the US, UK, Canada, and Singapore, as well as a significant remote workforce.

As an Azure Big Data Engineer at Tiger Analytics, you will be part of a dynamic team that is driving an AI revolution. Your typical day will involve working on a variety of analytics solutions and platforms, including data lakes, modern data platforms, and data fabric solutions using Open Source, Big Data, and Cloud technologies on Microsoft Azure. Your responsibilities may include designing and building scalable data ingestion pipelines, executing high-performance data processing, orchestrating pipelines, designing exception handling mechanisms, and collaborating with cross-functional teams to bring analytical solutions to life.

To excel in this role, we expect you to have 4 to 9 years of total IT experience with at least 2 years in big data engineering and Microsoft Azure. You should be well-versed in technologies such as Azure Data Factory, PySpark, Databricks, Azure SQL Database, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Your passion for writing high-quality, scalable code and your ability to collaborate effectively with stakeholders are essential for success in this role. Experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, and Neo4J, as well as knowledge of different file formats and REST API design, will be advantageous.
At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with varying skills and backgrounds to apply. We are committed to providing equal opportunities for all our employees and fostering a culture of trust, respect, and growth. Your compensation package will be competitive and aligned with your expertise and experience. If you are looking to be part of a forward-thinking team that is pushing the boundaries of what is possible in AI and analytics, we invite you to join us at Tiger Analytics and be a part of our exciting journey towards building innovative solutions that inspire and energize.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Haryana
On-site
As a Senior Manager specializing in Data Analytics & AI, you will be a pivotal member of the EY Data, Analytics & AI Ireland team. Your role as a Databricks Platform Architect will involve enabling clients to extract significant value from their information assets through innovative data analytics solutions. You will have the opportunity to work across various industries, collaborating with diverse teams and leading the design and implementation of data architecture strategies aligned with client goals. Your key responsibilities will include leading teams with varying skill sets in utilizing different Data and Analytics technologies, adapting your leadership style to meet client needs, creating a positive learning culture, engaging with clients to understand their data requirements, and developing data artefacts based on industry best practices. Additionally, you will assess existing data architectures, develop data migration strategies, and ensure data integrity and minimal disruption during migration activities. To qualify for this role, you must possess a strong academic background in computer science or related fields, along with at least 7 years of experience as a Data Architect or similar role in a consulting environment. Hands-on experience with cloud services, data modeling techniques, data management concepts, Python, Spark, Docker, Kubernetes, and cloud security controls is essential. Ideally, you will have the ability to effectively communicate technical concepts to non-technical stakeholders, lead the design and optimization of the Databricks platform, work closely with the data engineering team, maintain a comprehensive understanding of the data pipeline, and stay updated on new and emerging technologies in the field. EY offers a competitive remuneration package, flexible working options, career development opportunities, and a comprehensive Total Rewards package. 
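Part of a data migration strategy like the one described above is verifying data integrity after cutover. A minimal sketch, assuming rows from source and target can be compared as tuples; the `table_fingerprint` and `reconcile` names are hypothetical helpers, not an EY or Databricks API.

```python
import hashlib

def table_fingerprint(rows: list) -> tuple:
    """Row count plus an order-independent checksum, for comparing a table
    before and after migration without moving the data twice."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(row).encode()).hexdigest()
        digest ^= int(h[:16], 16)  # XOR makes the checksum order-independent
    return len(rows), f"{digest:016x}"

def reconcile(source_rows, target_rows) -> bool:
    """True if both sides have the same row count and checksum."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)
```

At scale the same idea is usually pushed down to the engine (e.g. aggregating hashes in Spark) rather than pulling rows to the driver.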
Additionally, you will benefit from support, coaching, opportunities for skill development, and a diverse and inclusive culture that values individual contributions. If you are passionate about leveraging data to solve complex problems, drive business outcomes, and contribute to a better working world, consider joining EY as a Databricks Platform Architect. Apply now to be part of a dynamic team dedicated to innovation and excellence.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the position of Lead Consultant - Databricks Developer (AWS). As a Databricks Developer in this role, you will be responsible for solving cutting-edge, real-world problems to meet both functional and non-functional requirements.
Responsibilities:
- Stay updated on new and emerging technologies and explore their potential applications for service offerings and products.
- Collaborate with architects and lead engineers to design solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess excellent coding skills, particularly in Python or Scala, with a preference for Python.
Qualifications:
Minimum qualifications:
- Bachelor's Degree in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- Experience in the Data Engineering domain.
- Completed at least 2 end-to-end projects in Databricks.
Additional qualifications:
- Familiarity with Delta Lake, dbConnect, Databricks API 2.0, and Databricks Workflows orchestration.
- Understanding of the Databricks Lakehouse concept and its implementation in enterprise environments.
- Ability to create complex data pipelines.
- Strong knowledge of data structures & algorithms.
- Proficiency in SQL and Spark SQL.
- Experience in performance optimization to enhance efficiency and reduce costs.
- Worked on both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience with cloud platforms (Azure, AWS, GCP) and common services like ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
- Skilled in writing unit and integration test cases.
- Excellent communication skills and experience working in teams of 5 or more.
- Positive attitude towards learning new skills and upskilling.
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL endpoints.
- Experience in CI/CD to build pipelines for Databricks jobs.
- Exposure to migration projects for building unified data platforms.
- Familiarity with dbt, Docker, and Kubernetes.
This is a full-time position based in India (Gurugram). The job posting was on August 5, 2024, and the unposting date is set for October 4, 2024.
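Partitioned layouts like the Hive-style `year=/month=/day=` directories used with Delta Lake and ADLS/S3 underpin the batch-pipeline efficiency mentioned above: a job that enumerates only the partitions it needs reads far less data. A small illustrative sketch; the helper name and the storage path in the test are invented.

```python
from datetime import date, timedelta

def partition_paths(root: str, start: date, end: date) -> list:
    """Build Hive-style year=/month=/day= partition paths for a date range,
    so a batch job reads only the partitions it needs (partition pruning)."""
    paths = []
    d = start
    while d <= end:
        paths.append(f"{root}/year={d.year}/month={d.month:02d}/day={d.day:02d}")
        d += timedelta(days=1)
    return paths
```

In Spark the same effect usually comes from filtering on the partition columns and letting the engine prune; explicit path lists are mainly useful for file-level utilities and backfills.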
Posted 2 weeks ago
5.0 - 10.0 years
5 - 7 Lacs
Bengaluru, Karnataka, India
On-site
Critical Skills to Have:
- Five or more years of experience in the field of information technology
- A general understanding of several software platforms and development technologies
- Experience with SQL, RDBMS, data lakes, and warehouses
- Knowledge of the Hadoop ecosystem, Azure, ADLS, Kafka, Delta Lake, and Databricks/Spark
- Knowledge of a data modeling tool, such as ERStudio or Erwin, is advantageous
- A history of collaboration with product managers, technology teams, and business partners
- Strong familiarity with Agile and DevOps practices
- Excellent written and verbal communication skills
Posted 2 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Principal Consultant, Azure Data Engineer.
Responsibilities:
- Strong knowledge of building pipelines in Azure Data Factory or Azure Synapse Analytics.
- Knowledge of Azure Databricks and Azure Synapse Analytics for ingesting data from different sources.
- Good at writing SQL queries on SQL Database and SQL DWH.
- Knowledge of the design, development, testing, and implementation of Azure data stack technologies.
- Expert-level knowledge of SQL DB and data warehousing.
- Knowledge of Azure Data Lake (Blob and ADLS) is mandatory.
- Should be able to perform querying in SQL Database and SQL DWH.
- Should be strong in either Python or Scala.
- Experience with various ETL techniques and frameworks.
- Ability to work in a team and to deliver and accept peer review.
Understanding of machine learning algorithms and Power BI is an added advantage, as is experience on GenAI projects. Qualifications we seek in you! Minimum qualifications: Graduate. Preferred qualifications: Personal drive and a positive work ethic to deliver results within deadlines and in demanding situations. Flexibility to adapt to a variety of engagement types, working hours, work environments, and locations. Excellent communication skills. Why join Genpact? Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
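The SQL DWH querying skills this role calls for mostly boil down to aggregations of the shape below. This is a hedged sketch using SQLite as a stand-in for Azure SQL Database or Synapse; the `sales` table and its columns are invented for illustration.

```python
import sqlite3

# In-memory SQLite stands in for a warehouse connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 100.0), ("south", 50.0), ("north", 75.0)],
)

# A typical warehouse-style aggregation: total per region, largest first.
query = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
"""
rows = conn.execute(query).fetchall()
```

The same query text runs essentially unchanged on SQL Database or a Synapse SQL pool; only the connection layer differs.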
Posted 2 weeks ago
6.0 - 11.0 years
25 - 35 Lacs
Bengaluru
Hybrid
We are hiring Azure Data Engineers for an active project at our Bangalore location. Interested candidates can share the following details by email, along with their updated resume:
- Total experience?
- Relevant experience in Azure data engineering?
- Current organization?
- Current location?
- Current fixed salary?
- Expected salary?
- Do you have any offers? If yes, mention the offer you have and the reason for looking for another opportunity.
- Open to relocating to Bangalore?
- Notice period? If serving or not working, mention your last working day (LWD).
- Do you have a PF account?
Posted 2 weeks ago
4.0 - 9.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role is to interface with the client for quality assurance and issue resolution, ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Preferred candidate profile: Primary skills: Technology -> Cloud Platform -> Azure Analytics Services -> Azure Data Lake. Azure Data Lake (ADLS) Developer/Engineer.
Posted 2 weeks ago
4.0 - 9.0 years
4 - 9 Lacs
Mumbai, Maharashtra, India
On-site
Key Responsibilities: IoT System Monitoring & Support: Provide Level 2 & 3 support for IoT solutions deployed in industrial environments. Monitor edge and cloud-based IoT systems for performance, connectivity, and reliability issues. Develop real-time alerting mechanisms for IoT device failures, connectivity drops, or performance degradation. Troubleshooting & Issue Resolution: Investigate and resolve issues related to IoT protocols (MQTT, HTTP, AMQP, OPC UA, etc.). Debug OPC UA tags and configurations for proper data transmission. Analyse logs from MQTT brokers (NanoMQ, EMQX, Mosquitto) and ensure message integrity. Work with Docker-based containerized workloads and troubleshoot deployment issues. Edge-to-Cloud Connectivity & Maintenance: Ensure stable edge-to-cloud connectivity using Azure IoT Hub, Azure Event Hubs, and ADLS. Support Azure-based IoT deployments, including Azure IoT Edge, the Azure IIoT framework, and Azure IoT Central. Maintain K3s or AKS (Kubernetes) clusters used for IoT edge deployments. CI/CD & DevOps Support: Manage and troubleshoot CI/CD pipelines for IoT deployments in Azure DevOps. Maintain version control, deployments, and container registries (Azure Container Registry). Debug Helm-based deployments and Kubernetes configurations. Documentation & Collaboration: Create and maintain technical documentation for IoT architectures, troubleshooting steps, and best practices. Work closely with IoT developers, DevOps engineers, and manufacturing teams to ensure smooth IoT system operations. Train end users and support teams on IoT monitoring tools and incident handling. Mandatory/Required Skills: Experience in IoT support - 4+ years of experience supporting industrial IoT solutions in a production environment. Strong troubleshooting skills - expertise in diagnosing and resolving issues in edge-to-cloud architectures. IoT & IIoT knowledge - hands-on experience with IoT protocols (MQTT, OPC UA, HTTP, AMQP).
MQTT brokers - experience working with NanoMQ, EMQX, Mosquitto. Python & scripting - strong Python scripting skills for debugging and automating IoT operations. Containerization - hands-on experience with Docker, building images, and deploying containers. Azure IoT services - experience with Azure IoT Hub, Azure Event Hubs, ADLS, and Azure Data Explorer. DevOps & CI/CD - experience with Azure DevOps, CI/CD pipelines, and Kubernetes (K3s or AKS). Monitoring & alerting - familiarity with monitoring IoT device health and cloud infrastructure. Good-to-Have Qualifications: Experience with Neuron and eKuiper for IoT data processing. Working experience with OPC UA servers or Prosys simulators. Hands-on experience with Azure IoT Edge, Azure IoT Central, Azure Arc, and Azure Edge Essentials. Familiarity with Rancher for Kubernetes cluster management. Experience in manufacturing or industrial IoT environments.
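One way to build the connectivity-drop alerting described above is to parse broker logs for disconnect/reconnect pairs. This sketch assumes a simple, made-up log line format (`<timestamp> disconnected: client=<id>`); real NanoMQ, EMQX, and Mosquitto logs each use their own formats, so the regex would need adapting.

```python
import re
from datetime import datetime

# Assumed log format: "2024-05-01 10:00:00 disconnected: client=sensor-1"
LINE = re.compile(r"(\S+ \S+) (connected|disconnected): client=(\S+)")

def find_drops(log_lines, max_gap_s=30):
    """Return clients that disconnected and did not reconnect within
    max_gap_s seconds - candidates for a connectivity alert."""
    last_disconnect = {}
    drops = []
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m.group(1))
        event, client = m.group(2), m.group(3)
        if event == "disconnected":
            last_disconnect[client] = ts
        elif client in last_disconnect:
            gap = (ts - last_disconnect.pop(client)).total_seconds()
            if gap > max_gap_s:
                drops.append(client)
    # clients that never reconnected at all also count as drops
    drops.extend(last_disconnect)
    return drops
```

In production this kind of check usually runs against a log stream (e.g. via Azure Monitor) rather than a static file, but the pairing logic is the same.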
Posted 3 weeks ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. Responsibilities: Maintain close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architects and lead engineers on solutions that meet functional and non-functional requirements. Demonstrate knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain. Qualifications we seek in you! Minimum qualifications: Bachelor's Degree (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience. Must have excellent coding skills in either Python or Scala, preferably Python. Must have implemented at least 2 end-to-end projects in Databricks.
Must have experience with the following Databricks components:
- Delta Lake
- dbConnect
- Databricks API 2.0
- Databricks Workflows orchestration
Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to create complex data pipelines. Must have good knowledge of data structures and algorithms. Must be strong in SQL and Spark SQL. Must have strong performance-optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on a cloud platform (Azure, AWS, or GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit and integration test cases. Must have strong communication skills and have worked on a team of 5 or more. Must have a great attitude towards learning new skills and upskilling existing ones. Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of dbt. Good to have knowledge of Docker and Kubernetes. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
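The Delta Lake work listed above typically centers on `MERGE INTO` for upserting a batch into a target table. As a runnable stand-in, the same pattern can be shown with SQLite's `INSERT ... ON CONFLICT` (requires SQLite 3.24+); the `dim_customer` table and its data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Asha'), (2, 'Ben')")

# Incoming batch: one update (id=2) and one new row (id=3).
batch = [(2, "Benjamin"), (3, "Chitra")]
conn.executemany(
    "INSERT INTO dim_customer VALUES (?, ?) "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name",  # upsert on the key
    batch,
)
rows = conn.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall()
```

In Databricks the equivalent is `MERGE INTO dim_customer USING updates ON ... WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...`, with Delta Lake handling concurrency and transaction-log bookkeeping.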
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. Responsibilities: Maintain close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architects and lead engineers on solutions that meet functional and non-functional requirements. Demonstrate knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain. Qualifications we seek in you! Minimum qualifications: Bachelor's Degree (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience. Must have excellent coding skills in either Python or Scala, preferably Python. Must have implemented at least 2 end-to-end projects in Databricks.
Must have experience with the following Databricks components:
- Delta Lake
- dbConnect
- Databricks API 2.0
- Databricks Workflows orchestration
Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to create complex data pipelines. Must have good knowledge of data structures and algorithms. Must be strong in SQL and Spark SQL. Must have strong performance-optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on a cloud platform (Azure, AWS, or GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit and integration test cases. Must have strong communication skills and have worked on a team of 5 or more. Must have a great attitude towards learning new skills and upskilling existing ones. Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of dbt. Good to have knowledge of Docker and Kubernetes. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. Responsibilities: Maintain close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architects and lead engineers on solutions that meet functional and non-functional requirements. Demonstrate knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain. Qualifications we seek in you! Minimum qualifications: Bachelor's Degree (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience. Must have excellent coding skills in either Python or Scala, preferably Python. Must have implemented at least 2 end-to-end projects in Databricks.
Must have experience with the following Databricks components:
- Delta Lake
- dbConnect
- Databricks API 2.0
- Databricks Workflows orchestration
Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to create complex data pipelines. Must have good knowledge of data structures and algorithms. Must be strong in SQL and Spark SQL. Must have strong performance-optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on a cloud platform (Azure, AWS, or GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit and integration test cases. Must have strong communication skills and have worked on a team of 5 or more. Must have a great attitude towards learning new skills and upskilling existing ones. Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of dbt. Good to have knowledge of Docker and Kubernetes. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
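As a hedged illustration of the batch-upsert pattern behind the Delta Lake and Lakehouse requirements above (a minimal sketch, not Genpact's implementation: the table name, paths, and the `run_pipeline` helper are hypothetical, and the Spark section assumes a Databricks runtime with `pyspark` and Delta Lake available):

```python
# Sketch of a Delta-style MERGE (upsert): keep the latest version of each key.
# merge_upsert mirrors in plain Python what `MERGE INTO target USING updates`
# does on a Delta table; run_pipeline shows the hypothetical Databricks wiring
# and only runs where pyspark and Delta Lake are installed.

def merge_upsert(target, updates, key="id"):
    """Merge update rows into target rows, last write wins per key."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # insert new keys, overwrite existing ones
    return sorted(merged.values(), key=lambda r: r[key])

def run_pipeline(spark, updates_path, target_table):
    """Hypothetical Databricks job body: the same upsert via Delta's MERGE."""
    from delta.tables import DeltaTable  # available on Databricks runtimes
    updates = spark.read.format("parquet").load(updates_path)
    (DeltaTable.forName(spark, target_table)
        .alias("t")
        .merge(updates.alias("u"), "t.id = u.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

if __name__ == "__main__":
    target = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
    updates = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]
    print(merge_upsert(target, updates))
```

The pure-Python core is what an interviewer for a role like this might probe: upsert semantics are "update matched rows, insert unmatched rows," whether expressed in a dictionary or in Spark SQL.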
Posted 3 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs with performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools such as Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance-optimization skills.
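The partitioning and caching tuning this role calls for can be sketched as follows (an illustrative sketch only: the ~128 MB-per-partition target and the `suggest_partitions` and `tune` helpers are assumptions for the example, not part of the posting):

```python
# Sketch: pick a repartition count from the input size using a common
# ~128 MB-per-partition heuristic, then cache a DataFrame that is reused
# across multiple actions so its lineage is not recomputed each time.

import math

TARGET_PARTITION_BYTES = 128 * 1024 * 1024  # ~128 MiB, a common heuristic

def suggest_partitions(input_bytes, target=TARGET_PARTITION_BYTES):
    """Return a partition count so each partition holds roughly `target` bytes."""
    return max(1, math.ceil(input_bytes / target))

def tune(df, input_bytes):
    """Hypothetical PySpark wiring: repartition, then cache for reuse."""
    n = suggest_partitions(input_bytes)
    return df.repartition(n).cache()  # cache() keeps reused stages in memory

if __name__ == "__main__":
    print(suggest_partitions(10 * 1024**3))  # 10 GiB of input -> 80 partitions
```

Too few partitions under-uses the cluster; too many adds scheduling overhead and tiny output files, which is why sizing from input bytes is a reasonable first pass before profiling.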
Posted 3 weeks ago