5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Snowflake Data Warehouse, Functional Testing
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Key Responsibilities:
a. Overall 12+ years of data experience, including 5+ years on Snowflake and 3+ years on DBT (Core and Cloud)
b. Played a key role in DBT-related discussions with teams and clients to understand business problems and solutioning requirements
c. As a DBT SME, liaise with clients on business/technology/data transformation programs; orchestrate the implementation of planned initiatives and the realization of business outcomes
d. Spearhead the team to translate business goals/challenges into practical data transformation and technology roadmaps and data architecture designs
e. Strong experience in designing, architecting, and managing (admin) Snowflake solutions and deploying data analytics solutions in Snowflake
f. Strong inclination for practice building, including spearheading thought leadership discussions and managing team activities

Technical Experience:
a. Strong experience working as a Snowflake on Cloud DBT Data Architect with thorough knowledge of the different services
b. Ability to architect solutions from on-prem to cloud and create end-to-end data pipelines using DBT
c. Excellent process knowledge in one or more of the following areas: Finance, Healthcare, Customer Experience
d. Experience in working on client proposals (RFPs), estimation, and POCs/POVs on new Snowflake features
e. DBT (Core and Cloud) end-to-end migration experience, including refactoring SQL for modularity, DBT modeling (.sql or .py files creation), and DBT job scheduling on at least 2 projects
f. Knowledge of the Jinja template language (macros) would be an added advantage
g. Knowledge of special features like DBT documentation, semantic layer creation, webhooks, etc.
h. DBT and cloud certification is important
i. Develop, fine-tune, and integrate LLMs (OpenAI, Anthropic, Mistral, etc.) into enterprise workflows via Cortex AI
j. Deploy AI agents capable of reasoning, tool use, chaining, and task orchestration for knowledge retrieval and decision support
k. Guide the creation and management of GenAI assets like prompts, embeddings, semantic indexes, agents, and custom bots
l. Collaborate with data engineers, ML engineers, and the leadership team to translate business use cases into GenAI-driven solutions
m. Provide mentorship and technical leadership to a small team of engineers working on GenAI initiatives
n. Stay current with advancements in Snowflake, LLMs, and generative AI frameworks to continuously enhance solution capabilities
o. Should have a good understanding of SQL and Python; the architectural concepts of Snowflake should also be clear

Professional Attributes:
a. Client management, stakeholder management, collaboration, and interpersonal and relationship-building skills
b. Ability to create innovative solutions for key business challenges
c. Eagerness to learn and develop oneself on an ongoing basis
d. Structured communication: written, verbal, and presentational

Educational Qualification:
a. MBA (Technology/Data-related specializations)/MCA/advanced degrees in STEM

Qualification: 15 years full time education
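The DBT modeling experience called for above covers both .sql and .py model files. As a flavor of the latter, here is a minimal, hypothetical dbt Python model (dbt Core 1.3+ running on Snowflake via Snowpark); the model and column names are illustrative, not taken from the posting:

```python
# models/staging/stg_completed_orders.py -- hypothetical dbt Python model.
# On Snowflake, dbt.ref() returns a Snowpark DataFrame.
def model(dbt, session):
    dbt.config(materialized="table")
    orders = dbt.ref("raw_orders")  # illustrative upstream model name
    # Keep only completed orders; "status" is an assumed column.
    return orders.filter(orders["status"] == "completed")
```

In practice most production models are .sql files templated with Jinja, which is where the Jinja macro knowledge mentioned above comes in.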
Posted 1 day ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Snowflake Data Warehouse, Manual Testing
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Key Responsibilities:
a. Overall 12+ years of data experience, including 5+ years on Snowflake and 3+ years on DBT (Core and Cloud)
b. Played a key role in DBT-related discussions with teams and clients to understand business problems and solutioning requirements
c. As a DBT SME, liaise with clients on business/technology/data transformation programs; orchestrate the implementation of planned initiatives and the realization of business outcomes
d. Spearhead the team to translate business goals/challenges into practical data transformation and technology roadmaps and data architecture designs
e. Strong experience in designing, architecting, and managing (admin) Snowflake solutions and deploying data analytics solutions in Snowflake
f. Strong inclination for practice building, including spearheading thought leadership discussions and managing team activities

Technical Experience:
a. Strong experience working as a Snowflake on Cloud DBT Data Architect with thorough knowledge of the different services
b. Ability to architect solutions from on-prem to cloud and create end-to-end data pipelines using DBT
c. Excellent process knowledge in one or more of the following areas: Finance, Healthcare, Customer Experience
d. Experience in working on client proposals (RFPs), estimation, and POCs/POVs on new Snowflake features
e. DBT (Core and Cloud) end-to-end migration experience, including refactoring SQL for modularity, DBT modeling (.sql or .py files creation), and DBT job scheduling on at least 2 projects
f. Knowledge of the Jinja template language (macros) would be an added advantage
g. Knowledge of special features like DBT documentation, semantic layer creation, webhooks, etc.
h. DBT and cloud certification is important
i. Develop, fine-tune, and integrate LLMs (OpenAI, Anthropic, Mistral, etc.) into enterprise workflows via Cortex AI
j. Deploy AI agents capable of reasoning, tool use, chaining, and task orchestration for knowledge retrieval and decision support
k. Guide the creation and management of GenAI assets like prompts, embeddings, semantic indexes, agents, and custom bots
l. Collaborate with data engineers, ML engineers, and the leadership team to translate business use cases into GenAI-driven solutions
m. Provide mentorship and technical leadership to a small team of engineers working on GenAI initiatives
n. Stay current with advancements in Snowflake, LLMs, and generative AI frameworks to continuously enhance solution capabilities
o. Should have a good understanding of SQL and Python; the architectural concepts of Snowflake should also be clear

Professional Attributes:
a. Client management, stakeholder management, collaboration, and interpersonal and relationship-building skills
b. Ability to create innovative solutions for key business challenges
c. Eagerness to learn and develop oneself on an ongoing basis
d. Structured communication: written, verbal, and presentational

Educational Qualification:
a. MBA (Technology/Data-related specializations)/MCA/advanced degrees in STEM

Qualification: 15 years full time education
Posted 1 day ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Snowflake Data Warehouse
Good to have skills: Data Engineering
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Key Responsibilities:
a. Overall 12+ years of data experience, including 5+ years on Snowflake and 3+ years on DBT (Core and Cloud)
b. Played a key role in DBT-related discussions with teams and clients to understand business problems and solutioning requirements
c. As a DBT SME, liaise with clients on business/technology/data transformation programs; orchestrate the implementation of planned initiatives and the realization of business outcomes
d. Spearhead the team to translate business goals/challenges into practical data transformation and technology roadmaps and data architecture designs
e. Strong experience in designing, architecting, and managing (admin) Snowflake solutions and deploying data analytics solutions in Snowflake
f. Strong inclination for practice building, including spearheading thought leadership discussions and managing team activities

Technical Experience:
a. Strong experience working as a Snowflake on Cloud DBT Data Architect with thorough knowledge of the different services
b. Ability to architect solutions from on-prem to cloud and create end-to-end data pipelines using DBT
c. Excellent process knowledge in one or more of the following areas: Finance, Healthcare, Customer Experience
d. Experience in working on client proposals (RFPs), estimation, and POCs/POVs on new Snowflake features
e. DBT (Core and Cloud) end-to-end migration experience, including refactoring SQL for modularity, DBT modeling (.sql or .py files creation), and DBT job scheduling on at least 2 projects
f. Knowledge of the Jinja template language (macros) would be an added advantage
g. Knowledge of special features like DBT documentation, semantic layer creation, webhooks, etc.
h. DBT and cloud certification is important
i. Develop, fine-tune, and integrate LLMs (OpenAI, Anthropic, Mistral, etc.) into enterprise workflows via Cortex AI
j. Deploy AI agents capable of reasoning, tool use, chaining, and task orchestration for knowledge retrieval and decision support
k. Guide the creation and management of GenAI assets like prompts, embeddings, semantic indexes, agents, and custom bots
l. Collaborate with data engineers, ML engineers, and the leadership team to translate business use cases into GenAI-driven solutions
m. Provide mentorship and technical leadership to a small team of engineers working on GenAI initiatives
n. Stay current with advancements in Snowflake, LLMs, and generative AI frameworks to continuously enhance solution capabilities
o. Should have a good understanding of SQL and Python; the architectural concepts of Snowflake should also be clear

Professional Attributes:
a. Client management, stakeholder management, collaboration, and interpersonal and relationship-building skills
b. Ability to create innovative solutions for key business challenges
c. Eagerness to learn and develop oneself on an ongoing basis

Educational Qualification:
a. MBA (Technology/Data-related specializations)/MCA/advanced degrees in STEM

Qualification: 15 years full time education
Posted 1 day ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world.

Our Purpose – Bridging the World Through Travel
We believe travel allows people to enjoy, learn, and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding, and happiness. We are a skillful, driven, and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know Our Team
The Data department, based in Bangkok, oversees all of Agoda's data-related requirements. Our ultimate goal is to enable and increase the use of data in the company through creative approaches and the implementation of powerful resources such as operational and analytical databases, queue systems, BI tools, and data science technology. We hire the brightest minds from around the world to take on this challenge and equip them with the knowledge and tools that contribute to their personal growth and success while supporting our company's culture of diversity and experimentation. The role the Data team plays at Agoda is critical, as business users, product managers, engineers, and many others rely on us to empower their decision making. We are equally dedicated to our customers by improving their search experience with faster results and protecting them from any fraudulent activities. Data is interesting only when you have enough of it, and we have plenty. This is what drives up the challenge as part of the Data department, but also the reward.

The Opportunity
Please note: the role will be based in Bangkok. We are looking for ambitious and agile data scientists who would like to seize the opportunity to work on some of the most challenging production machine learning and big data platforms worldwide, processing some 600B events every day and making some 5B predictions. As part of the Data Science and Machine Learning (AI/ML) team, you will be exposed to real-world challenges such as: dynamic pricing, predicting customer intents in real time, ranking search results to maximize lifetime value, classifying and deep learning content and personalization signals from unstructured data such as images and text, making personalized recommendations, innovating algorithm-supported promotions and products for supply partners, discovering insights from big data, and innovating the user experience. To tackle these challenges, you will have the opportunity to work on one of the world's largest ML infrastructures, employing dozens of GPUs working in parallel, 30K+ CPU cores, and 150TB of memory.
In This Role, You'll Get To:
- Design, code, experiment, and implement models and algorithms to maximize customer experience, supply-side value, business outcomes, and infrastructure readiness
- Mine big data covering hundreds of millions of customers and more than 600M daily user-generated events, plus supplier and pricing data, and discover actionable insights to drive improvements and innovation
- Work with developers and a variety of business owners to deliver daily results with the best quality
- Research, discover, and harness new ideas that can make a difference

What You'll Need To Succeed:
- 4+ years of hands-on data science experience
- Excellent understanding of AI/ML/DL and statistics, as well as coding proficiency using related open-source libraries and frameworks
- Significant proficiency in SQL and languages like Python, PySpark, and/or Scala
- Can lead and work independently, as well as play a key role in a team
- Good communication and interpersonal skills for working in a multicultural work environment

It's Great if You Have:
- PhD or MSc in Computer Science, Operations Research, Statistics, or other quantitative fields
- Experience in NLP, image processing, and/or recommendation systems
- Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop
- Experience in data science for e-commerce and/or OTA

We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance are available for eligible candidates.

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy.

Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency's representation of a candidate, from unsolicited third-party or agency submissions.
If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
Posted 1 day ago
3.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
Snowflake Data Engineer
We're looking for a candidate who has a strong background in data technologies such as SQL Server, Snowflake, and similar platforms. In addition, they should bring experience in at least one other programming language, with proficiency in Python being a key requirement. The ideal candidate should also have:
- Exposure to DevOps pipelines within a data engineering context
- At least a high-level understanding of AWS services and how they fit into modern data architectures
- A proactive mindset: someone who is motivated to take initiative and contribute beyond assigned tasks
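Since the role pairs Python proficiency with Snowflake, below is a minimal sketch of querying Snowflake from Python with the snowflake-connector-python package; the account, credentials, and table names are placeholders, not details from the posting.

```python
import snowflake.connector  # pip install snowflake-connector-python

# All connection values below are placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",          # prefer a secrets manager over literals
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT order_id, amount FROM staging.orders LIMIT 10")
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```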
Posted 1 day ago
3.0 - 7.0 years
5 - 9 Lacs
Hyderabad, Pune
Work from Office
Snowflake Data Engineer
As noted in the listing above, for this role we're looking for a candidate who has a strong background in data technologies such as SQL Server, Snowflake, and similar platforms. In addition, they should bring experience in at least one other programming language, with proficiency in Python being a key requirement. The ideal candidate should also have:
- Exposure to DevOps pipelines within a data engineering context
- At least a high-level understanding of AWS services and how they fit into modern data architectures
- A proactive mindset: someone who is motivated to take initiative and contribute beyond assigned tasks
Posted 1 day ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution.

Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred Technical and Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: detection and prevention tools for Company products and Platform, and customer-facing
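As a rough illustration of the PySpark ETL skills listed above, a minimal batch pipeline might look like the following sketch; the paths and column names are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw data (placeholder path).
orders = spark.read.parquet("s3a://raw-bucket/orders/")

# Transform: deduplicate, derive a date column, drop bad rows.
clean = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to the curated zone (placeholder path).
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/orders/"
)
```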
Posted 1 day ago
3.0 - 8.0 years
11 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer, you are required to:
- Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire data pipeline.
- Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation.
- Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets to enable efficient querying.
- Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable.
- Stay up to date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies. Evaluate and implement new tools to improve data engineering processes.

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.

Experience level: At least 3-5 years of hands-on experience in Data Engineering.

Desired Knowledge & Experience:
- Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming; knowing Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Test: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing
- Big Data Design: Lakehouse/Medallion Architecture, Parquet/Delta, Partitioning, Distribution, Data Skew, Compaction
- Languages: Python/Functional Programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: Data Lake and Big Data Storage Design

Additionally, it is helpful to know the basics of:
- Data Pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, Stored Procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data Catalog: Azure Purview, Apache Atlas, Informatica

Required Soft Skills & Other Capabilities:
- Great attention to detail and good analytical abilities
- Good planning and organizational skills
- Collaborative approach to sharing ideas and finding solutions
- Ability to work independently and also in a global team environment
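To give a flavour of the Databricks Autoloader and Delta items above, here is a minimal sketch of streaming JSON ingestion into a bronze Delta table; it assumes a Databricks runtime where `spark` is ambient, and all paths and table names are placeholders.

```python
# Auto Loader: incrementally ingest new JSON files into a bronze Delta table.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders/schema")
    .load("/mnt/landing/orders/")          # placeholder landing path
)

(
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders/")
    .trigger(availableNow=True)            # drain available files, then stop
    .toTable("bronze.orders")              # placeholder table name
)
```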
Posted 1 day ago
8.0 - 13.0 years
13 - 17 Lacs
Noida
Work from Office
Join Us in Transforming Healthcare with the Power of Data & AI
At Innovaccer, we're on a mission to build the most advanced Healthcare Intelligence Platform ever created. Grounded in an AI-first design philosophy, our platform turns complex health data into real-time intelligence, empowering healthcare systems to make faster, smarter decisions. We are building a unified, end-to-end data platform that spans Data Acquisition & Integration, Master Data Management, Data Classification & Governance, Advanced Analytics & AI Studio, App Marketplace, AI-as-BI capabilities, and more. All of this is powered by an Agent-first approach, enabling customers to build solutions dynamically and at scale. You'll have the opportunity to define and develop platform capabilities that help healthcare organizations tackle some of the industry's most pressing challenges, such as kidney disease management, clinical trials optimization for pharmaceutical companies, supply chain intelligence for pharmacies, and many more real-world applications. We're looking for talented engineers and platform thinkers who thrive on solving large-scale, complex, and meaningful problems. If you're excited about working at the intersection of healthcare, AI, and cutting-edge platform engineering, we'd love to hear from you.

About the Role
We are looking for a Staff Engineer to design and develop highly scalable, low-latency data platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life
- Architect, design, and build scalable data tools and frameworks.
- Collaborate with cross-functional teams to ensure data compliance, security, and usability.
- Lead initiatives around metadata management, data lineage, and data cataloging.
- Define and evangelize standards and best practices across data engineering teams.
- Own the end-to-end lifecycle of tooling, from prototyping to production deployment.
- Mentor and guide junior engineers and contribute to technical leadership across the organization.
- Drive innovation in privacy-by-design, regulatory compliance (e.g., HIPAA), and data observability solutions.

What You Need
- 8+ years of experience in software engineering, with strong experience building distributed systems.
- Proficient in backend development (Python, Java, Scala, or Go) and familiar with RESTful API design.
- Expertise in modern data stacks: Kafka, Spark, Airflow, Snowflake, etc.
- Experience with open-source data governance frameworks like Apache Atlas, Amundsen, or DataHub is a big plus.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their native data governance offerings.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
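The stack above names Airflow for orchestration; as a minimal sketch (Airflow 2.4+), a metadata-sync DAG could be declared like this, with the DAG id and task logic purely illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def sync_metadata():
    print("pulling lineage/metadata from source systems")  # placeholder logic


with DAG(
    dag_id="governance_metadata_sync",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="sync_metadata", python_callable=sync_metadata)
```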
Posted 1 day ago
7.0 - 12.0 years
14 - 18 Lacs
Noida
Work from Office
Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community, and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.

What You Need
- BS in an Engineering or Science discipline, or equivalent experience
- 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role
- Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
- Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments
- Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW
- Experience working on larger initiatives building and rationalizing large-scale data environments with a wide variety of data pipelines, possibly with internal and external partner integrations, would be a plus
- Willingness to experiment and learn new approaches and technology applications
- Knowledge and experience with various relational databases and demonstrable proficiency in SQL, supporting analytics uses and users
- Knowledge of software engineering and agile development best practices
- Excellent written and verbal communication skills

The Brightly Culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
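For the event-streaming integration pattern mentioned above, a minimal sketch of publishing a change event to AWS Kinesis with boto3 follows; the stream name and payload are invented for illustration.

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical CDC-style change record.
event = {"table": "orders", "op": "UPDATE", "order_id": 42}

kinesis.put_record(
    StreamName="cdc-events",                 # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=str(event["order_id"]),     # keeps a given key on one shard
)
```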
Posted 1 day ago
5.0 - 10.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Overview
The Gen AI Engineer will be part of a team which designs, builds, and operates the AI services of Siemens Healthineers (SHS). The ideal candidate for this role will have experience with AI services. The role will require the candidate to develop and maintain artificial intelligence systems and applications that help businesses and organizations solve complex problems. The role also requires expertise in machine learning, deep learning, natural language processing, computer vision, and other AI technologies.

Tasks and Responsibilities
The Generative AI Engineer is responsible for the design, architecture, and development of AI products/services. The main responsibilities include:
- Designing, developing, and deploying Azure-based AI solutions, including machine learning models, cognitive services, and data analytics solutions.
- Collaborating with cross-functional teams, such as data scientists, business analysts, and developers, to design and implement AI solutions that meet business requirements.
- Building and training machine learning models using Azure Machine Learning, and tuning models for optimal performance.
- Developing and deploying custom AI models using Azure Cognitive Services, such as speech recognition, language understanding, and computer vision.
- Creating data pipelines to collect, process, and prepare data for analysis and modeling using Azure data services, such as Azure Data Factory and Azure Databricks.
- Implementing data analytics solutions using Azure Synapse Analytics or other Azure data services.
- Deploying and managing Azure services and resources using Azure DevOps or other deployment tools.
- Monitoring and troubleshooting deployed solutions to ensure optimal performance and reliability.
- Ensuring compliance with security and regulatory requirements related to AI solutions.
- Staying up to date with the latest Azure AI technologies and industry developments, and sharing knowledge and best practices with the team.

Qualifications
- Overall 5+ years of combined experience in IT, with the most recent 3 years as an AI engineer.
- Bachelor's or master's degree in computer science, information technology, or a related field.
- Experience in designing, developing, and delivering successful AI services.
- Experience with cloud computing technologies, such as Azure.
- Relevant industry certifications, such as Microsoft Certified: Azure AI Engineer, Azure Solutions Architect, etc., are a plus.
- Excellent written and verbal communication skills to collaborate with cross-functional teams and communicate technical information to non-technical stakeholders.

Technical Skills
- Proficiency in programming languages: you should be proficient in programming languages such as Python and R.
- Experience with Azure AI services: you should have experience with Azure AI services such as Azure Machine Learning, Azure Cognitive Services, and Azure Databricks.
- Data handling and processing: you should be proficient in data handling and processing techniques such as data cleaning, data normalization, and feature extraction. Knowledge of SQL, NoSQL, and big data technologies such as Hadoop and Spark is also beneficial.
- Experience with the cloud platform: you should have a good understanding of cloud computing concepts and experience working with Azure services such as containers, Kubernetes, Web Apps, Azure Front Door, CDN, Web Application Firewalls, etc.
- DevOps and CI/CD: you should be familiar with DevOps practices and CI/CD pipelines, including tools such as Azure DevOps and Git.
- Security and compliance: you should be aware of security and compliance considerations when building and deploying AI models in the cloud. This includes knowledge of Azure security services, compliance frameworks such as HIPAA and GDPR, and best practices for securing data and applications in the cloud.
- Machine learning algorithms and frameworks: knowledge of machine learning algorithms and frameworks such as TensorFlow, Keras, PyTorch, and Scikit-learn is a plus.
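As a small taste of the Azure Cognitive Services work described above, a sentiment call against the Azure AI Language service might look like this sketch (azure-ai-textanalytics SDK assumed; the endpoint and key are placeholders):

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<key>"),                      # placeholder
)

# Analyze the sentiment of a single document.
result = client.analyze_sentiment(["The new booking flow is great."])
print(result[0].sentiment)  # e.g. "positive"
```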
Posted 1 day ago
3.0 years
8 - 30 Lacs
Pune, Maharashtra, India
On-site
Industry & Sector: Enterprise cloud consulting and data analytics services provider delivering large-scale AWS lakehouse, real-time analytics, and AI solutions for global clients.

Role & Responsibilities
- Design, build, and optimise high-volume data pipelines on AWS using Spark (Scala) and Glue.
- Develop reusable ETL frameworks that ingest structured and semi-structured data into S3-based data lakes.
- Tune Spark jobs for cost, latency, and scalability; implement partitioning, caching, and checkpointing best practices (see the sketch after this listing).
- Collaborate with data scientists to productionise feature engineering and model-output pipelines.
- Automate deployment via CloudFormation or Terraform and integrate monitoring with CloudWatch and Prometheus.
- Champion coding standards, peer reviews, and knowledge sharing across the data engineering guild.

Skills & Qualifications
Must-Have: 3+ years building Spark applications in Scala on AWS; hands-on with Glue, EMR, S3, IAM, and Step Functions; proficient in SQL, data modelling, and partitioning strategies; version control with Git and CI/CD pipelines (CodePipeline, Jenkins, or similar).
Preferred: Experience with Delta Lake or Iceberg table formats; knowledge of Python for orchestration tasks; exposure to streaming (Kafka, Kinesis) and near-real-time processing; certification in AWS Data Analytics or Solutions Architect.

Benefits & Culture Highlights
- On-site, engineer-first culture with dedicated R&D sprints and tech conference sponsorship.
- Performance-linked bonuses and accelerated promotion paths for high impact.
- Collaborative workspace with wellness programs, hack days, and a flexible leave policy.

Skills: EMR, S3, data engineering, Python, Iceberg, SQL, Kafka, CI/CD, AWS Data Engineer (Spark Scala), AWS, CodePipeline, Jenkins, data modelling, Git, Delta Lake, Scala, IAM, Step Functions, DevOps, Spark, Apache Spark, Kinesis, Glue, partitioning strategies
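The tuning bullet above names partitioning, caching, and checkpointing; a rough sketch of those practices follows (shown in PySpark for brevity, though the posting asks for Scala; all paths are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()
spark.sparkContext.setCheckpointDir("s3a://my-bucket/checkpoints/")  # placeholder

events = spark.read.json("s3a://my-bucket/raw/events/")              # placeholder

daily = (
    events.repartition(200, "event_date")  # size partition count to the cluster
    .cache()                               # reuse across several downstream actions
)
daily = daily.checkpoint()                 # truncate a long lineage chain

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://my-bucket/curated/events/"
)
```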
Posted 1 day ago
4.0 - 5.0 years
10 - 15 Lacs
Pune
Work from Office
Hello Visionary!
We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make an outstanding addition to our vibrant team.

Siemens Mobility is an independently run company of Siemens AG. Its core business includes rail vehicles, rail automation and electrification solutions, turnkey systems, intelligent road traffic technology, and related services. In Mobility, we help our customers meet the need for hard-working mobility solutions. We're making the lives of people who travel easier and more enjoyable while constantly developing new, intelligent mobility solutions!

We are looking for: Embedded Linux Engineer - Train IT

You'll make a difference by:
You will be part of the engineering team for new and exciting software applications in our trains. Your mission will be to customize the Linux image of our Train IT platform for a specific train and integrate applications such as the train server, train-to-ground communication, passenger information, passenger counting, or CCTV. This role requires a wide range of technical skills and a desire to find out how things work and why.
- Be a member of the international engineering team
- Configure and customize the Debian Linux image for deployment to the train
- Customize applications and configure devices such as network switches and special devices according to the system architecture of the train
- Integrate these applications and devices with other systems in the train
- Cooperate with the software test team
- Provide technical support in your area of expertise

Desired Skills:
- Minimum 4-5 years of experience in software development
- Experience with Linux as a power user or administrator
- Experience with configuration of managed switches
- Good knowledge of TCP/IP
- Understanding of network protocols like DHCP, RADIUS, DNS, multicast, SSL/TLS
- Experience with issue tracking tools such as JIRA or Redmine
- Highly organized and self-motivated
- Hands-on, problem-solving mentality
- Experience in the railway industry
- Long-term interest in the IT domain, passion for IT
- German language
- Python programming
- Fluent English

Join us and be yourself! Make your mark in our exciting world at Siemens. This role is based in Pune. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting entire cities, countries, and the shape of things to come. Find out more about mobility at https://new.siemens.com/global/en/products/mobility.html and about Siemens careers at
Posted 1 day ago
2.0 - 7.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Information Systems

Responsibilities: Design and implement real-time data streaming pipelines, using Apache Kafka expertise to build event-driven architectures and stream processing. Experience in a cloud platform like Azure.

Preferred Skills:
Technology-Java-Apache-Kafka
Technology-Java-Core Java
Technology-Big Data - Data Processing-Spark-Apache Flink
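The posting centres on Kafka-based event-driven pipelines (in Java); purely as an illustration of the consume loop, here is a minimal Python sketch with the confluent-kafka client, with the broker, group, and topic names invented:

```python
from confluent_kafka import Consumer  # pip install confluent-kafka

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "orders-processor",         # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])              # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)            # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(msg.error())
            continue
        print(msg.value().decode("utf-8"))  # hand off to stream processing here
finally:
    consumer.close()
```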
Posted 1 day ago
5.0 - 10.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities: Strong knowledge in Python ML, AI/ML, LLMs (Large Language Models), and Gen AI.

Preferred Skills: Technology-Machine Learning-Generative AI
Posted 1 day ago
5.0 - 8.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities: Strong knowledge in Python ML, AI/ML, LLMs (Large Language Models), and Gen AI.

Preferred Skills:
Technology-Open System-Shell Scripting-Bash Scripting
Technology-Machine Learning-Python
Technology-Machine Learning-Generative AI
Technology-Machine Learning-AI/ML Solution Architecture and Design-Traditional AI/ML
Posted 1 day ago
5.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem solving, analytical, and debugging skills

Technical and Professional Requirements:
Primary skills: Technology-Analytics - Packages-Python - Big Data; Technology-Big Data - Data Processing-Spark; Technology-ETL & Data Quality-ETL & Data Quality - ALL; Technology-Machine Learning-Python

Preferred Skills:
Technology-Analytics - Packages-Python - Big Data
Technology-ETL & Data Quality-ETL & Data Quality - ALL
Technology-Big Data - Data Processing-Spark-SparkSQL
Technology-Machine Learning-Python
Posted 1 day ago
9.0 - 11.0 years
13 - 17 Lacs
Pune
Work from Office
Educational Qualification: Bachelor of Engineering, Bachelor of Technology
Service Line: Enterprise Package Application Services

Responsibilities
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to lead the engagement effort of providing high-quality and value-adding consulting solutions to customers at different stages, from problem definition to diagnosis to solution design, development, and deployment. You will review the proposals prepared by consultants, provide guidance, and analyze the solutions defined for the client business problems to identify any potential risks and issues. You will identify change management requirements and propose a structured approach to the client for managing the change using multiple communication mechanisms. You will also coach and create a vision for the team, provide subject matter training for your focus areas, and motivate and inspire team members through effective and timely feedback and recognition for high performance. You would be a key contributor in unit-level and organizational initiatives with an objective of providing high-quality, value-adding consulting solutions to customers, adhering to the guidelines and processes of the organization. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Design, develop, and maintain scalable data pipelines on Databricks using PySpark
- Collaborate with data analysts and scientists to understand data requirements and deliver solutions
- Optimize and troubleshoot existing data pipelines for performance and reliability
- Ensure data quality and integrity across various data sources
- Implement data security and compliance best practices
- Monitor data pipeline performance and conduct necessary maintenance and updates
- Document data pipeline processes and technical specifications

Location of posting: Infosys Ltd. is committed to ensuring you have the best experience throughout your journey with us. We currently have open positions in a number of locations across India - Bangalore, Pune, Hyderabad, Mysore, Kolkata, Chennai, Chandigarh, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Coimbatore, Mumbai, Jaipur, Hubli, Vizag. While we work in accordance with business requirements, we shall strive to offer you the location of your choice, where possible.

Technical and Professional Requirements:
- Must have 9+ years of experience in data engineering
- Proficiency with Databricks and Apache Spark
- Strong SQL skills and experience with relational databases
- Experience with big data technologies (e.g., Hadoop, Kafka)
- Knowledge of data warehousing concepts and ETL processes
- Experience with CI/CD tools, particularly Jenkins
- Excellent problem-solving and analytical skills
- Solid understanding of big data fundamentals and experience with Apache Spark
- Familiarity with cloud platforms (e.g., AWS, Azure)
- Experience with version control systems (e.g., Bitbucket)
- Understanding of DevOps principles and tools (e.g., CI/CD, Jenkins)
- Databricks certification is a plus

Preferred Skills:
Technology-Big Data-Big Data - ALL
Technology-Cloud Integration-Azure Data Factory (ADF)
Technology-Cloud Platform-AWS Data Analytics-AWS Data Exchange
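A common building block for the Databricks/PySpark pipelines described above is an idempotent upsert into a Delta table; a minimal sketch with the delta-spark API is below (a Databricks runtime with an ambient `spark` session is assumed, and the table and key names are illustrative):

```python
from delta.tables import DeltaTable

# Placeholder target table and staging source.
target = DeltaTable.forName(spark, "silver.customers")
updates = spark.read.parquet("/mnt/staging/customers/")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh existing rows
    .whenNotMatchedInsertAll()   # insert new rows
    .execute()
)
```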
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join our digital revolution in NatWest Digital X
In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description
Join us as a Solution Architect
- This is an opportunity for an experienced Solution Architect to help us define the high-level technical architecture and design for a key data analytics and insights platform that powers the personalised customer engagement initiatives of the business
- You'll define and communicate a shared technical and architectural vision of end-to-end designs that may span multiple platforms and domains
- Take on this exciting new challenge and hone your technical capabilities while advancing your career and building your network across the bank
- We're offering this role at vice president level

What you'll do
We'll look to you to influence and promote the collaboration across platform and domain teams on the solution delivery. Partnering with platform and domain teams, you'll elaborate the solution and its interfaces, validating technology assumptions, evaluating implementation alternatives, and creating the continuous delivery pipeline. You'll also provide analysis of options and deliver end-to-end solution designs using the relevant building blocks, as well as producing designs for features that allow frequent incremental delivery of customer value.

On top of this, you'll be:
- Owning the technical design and architecture development that aligns with bank-wide enterprise architecture principles, security standards, and regulatory requirements
- Participating in activities to shape requirements, validating designs and prototypes to deliver change that aligns with the target architecture
- Promoting adaptive design practices to drive collaboration of feature teams around a common technical vision using continuous feedback
- Making recommendations of potential impacts to existing and prospective customers of the latest technology and customer trends
- Engaging with the wider architecture community within the bank to ensure alignment with enterprise standards
- Presenting solutions to governance boards and design review forums to secure approvals
- Maintaining up-to-date architectural documentation to support audits and risk assessment

The skills you'll need
As a Solution Architect, you'll bring expert knowledge of application architecture, and in business data or infrastructure architecture, with working knowledge of industry architecture frameworks such as TOGAF or ArchiMate. You'll also need an understanding of Agile and contemporary methodologies, with experience of working in Agile teams. A certification in cloud solutions like AWS Solution Architect is desirable, while an awareness of agentic AI based application architectures using LLMs like OpenAI and agentic frameworks like LangGraph and CrewAI will be advantageous.
Furthermore, you'll need:
- Strong experience in solution design, enterprise architecture patterns, and cloud-native applications, including the ability to produce multiple views to highlight different architectural concerns
- Familiarity with big data processing in the banking industry
- Hands-on experience with AWS services, including but not limited to S3, Lambda, EMR, DynamoDB, and API Gateway
- An understanding of big data processing using frameworks or platforms like Spark, EMR, Kafka, Apache Flink, or similar
- Knowledge of real-time data processing, event-driven architectures, and microservices
- A conceptual understanding of data modelling and analytics, and of machine learning or deep-learning models
- The ability to communicate complex technical concepts clearly to peers and leadership-level colleagues
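To make the event-driven AWS items above concrete, a minimal Lambda handler behind API Gateway (proxy integration) could look like this sketch; the routing logic is omitted and the payload shape is assumed:

```python
import json


def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a string.
    body = json.loads(event.get("body") or "{}")

    # ... route to downstream services here, e.g. DynamoDB via boto3 ...

    return {
        "statusCode": 200,
        "body": json.dumps({"received": body}),
    }
```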
Posted 1 day ago
2.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in developing solutions that align with organizational goals and objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work-related problems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop high-quality software design and architecture.
- Identify, prioritize, and execute tasks in the software development life cycle.
- Conduct software analysis, programming, testing, and debugging.
- Create technical documentation for reference and reporting.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data analytics platforms.
- Experience with big data technologies such as Apache Spark and Hadoop.
- Knowledge of programming languages like Python, Scala, or SQL.
- Hands-on experience in developing and deploying data pipelines.
- Familiarity with data modeling and database design principles.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years full time education
Posted 1 day ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: AWS Glue
Good to have skills: Microsoft SQL Server, Python (Programming Language), Data Engineering
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Project Description: Developing a customer insights platform that will provide an ID graph and digital customer view to help drive improvements in marketing decisions.

Responsibilities:
- Design, build, and maintain data pipelines using AWS services (Glue, Neptune, S3).
- Participate in code reviews, testing, and optimization of data pipelines.
- Collaborate with stakeholders to understand data requirements and translate them into technical solutions.

Requirements:
- Proven experience as a Senior Data Engineer/Data Architect, or a similar role.
- Knowledge of data governance and security practices.
- Extensive experience with data lake technologies (NiFi, Spark, Hive Metastore, Object Storage, Delta Lake Framework).
- Extensive experience with AWS cloud services, including AWS Glue, Neptune, S3, and Lambda.
- Experience with AWS Neptune or other graph database technologies.
- Experience in data modelling and design.
- Experience with event-driven architecture.
- Experience with Python.
- Experience with SQL.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.

Nice to have:
- Experience with observability solutions (Splunk, New Relic)
- Experience with Infrastructure as Code (Terraform, CloudFormation)
- Experience with CI/CD (Jenkins)
- Experience with Kubernetes
- Familiarity with data visualization tools

Support Engineer: Similar skills as the above, but with more of a support focus; able to troubleshoot, patch and upgrade, and make minor enhancements and fixes to the infrastructure and pipelines. Experience with observability, CloudWatch, New Relic, and monitoring.

Qualification: 15 years full time education
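As a sketch of the Glue pipeline work described above, a skeleton Glue PySpark job might read from the Data Catalog, remap columns, and write Parquet to S3; the catalog database, table, and bucket names are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())

# Read from the Glue Data Catalog (placeholder database/table).
dyf = glue.create_dynamic_frame.from_catalog(
    database="customer_insights", table_name="raw_events"
)

# Rename/cast columns (illustrative mappings).
mapped = ApplyMapping.apply(
    frame=dyf,
    mappings=[
        ("user_id", "string", "user_id", "string"),
        ("event_ts", "string", "event_ts", "timestamp"),
    ],
)

# Write curated Parquet back to S3 (placeholder path).
glue.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/events/"},
    format="parquet",
)
```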
Posted 1 day ago
7.0 - 9.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem solving, analytical, and debugging skills

Technical and Professional Requirements: Technology-Big Data - Data Processing-Spark (Scala developer)
Preferred Skills: Technology-Functional Programming-Scala
Posted 1 day ago
3.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BTech, BSc, BCA, MCA, MTech, MSc
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of design principles and fundamentals of architecture
- Understanding of performance engineering
- Knowledge of quality processes and estimation techniques
- Basic understanding of the project domain
- Ability to translate functional/nonfunctional requirements to systems requirements
- Ability to design and code complex programs
- Ability to write test cases and scenarios based on the specifications
- Good understanding of SDLC and agile methodologies
- Awareness of latest technologies and trends
- Logical thinking and problem solving skills along with an ability to collaborate

Technical and Professional Requirements:
Primary skills: Technology-Data on Cloud-DataStore-Snowflake
Preferred Skills: Technology-Data on Cloud-DataStore-Snowflake
Posted 1 day ago
5.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering, BCom, BSc, MSc, MCA
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem solving, analytical, and debugging skills

Technical and Professional Requirements:
Primary skills: Technology-Big Data - Data Processing-Spark
Preferred Skills: Technology-Big Data - Data Processing-Spark-SparkSQL
Posted 1 day ago
3.0 years
8 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Azure Databricks Engineer

Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale.

Role & Responsibilities
- Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake.
- Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources.
- Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response.
- Collaborate with data scientists to prepare feature stores and accelerate model training and inference.
- Automate deployment with Azure DevOps, ARM/Bicep, and the Databricks CLI for secure, repeatable releases.
- Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality.

Skills & Qualifications
Must-Have
- 3+ years building large-scale Spark or Databricks workloads in production.
- Expert hands-on with PySpark/Scala, Delta Lake, and SQL optimisation.
- Deep knowledge of Azure services: Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub.
- Proficiency in CI/CD, Git, and automated testing for data engineering.
- Understanding of data modelling, partitioning, and performance tuning strategies.
Preferred
- Exposure to MLflow, feature store design, or predictive model serving.
- Experience implementing role-based access controls and GDPR/PCI compliance on Azure.
- Certification: Microsoft DP-203 or Databricks Data Engineer Professional.

Benefits & Culture
- Work on cutting-edge Azure Databricks projects with Fortune 500 clients.
- Flat, learning-centric culture that funds certifications and conference passes.
- Hybrid leave policy, comprehensive health cover, and performance bonuses.

Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse
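For the MLflow exposure listed under Preferred, tracking a training run is only a few lines; this sketch assumes a Databricks-style workspace experiment path, and the parameter and metric values are invented:

```python
import mlflow

mlflow.set_experiment("/Shared/churn-model")  # placeholder experiment path

with mlflow.start_run():
    mlflow.log_param("max_depth", 8)          # hypothetical hyperparameter
    mlflow.log_metric("auc", 0.91)            # hypothetical validation score
```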
Posted 1 day ago