0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary
Job Role: Data Engineer – Gen AI Data Pipeline – Consultant
Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale, complex software solutions at the enterprise level. These applications are often high-volume, mission-critical systems that give you exposure to end-to-end functional and domain knowledge. You will work with business, functional, and technical teams located across shores. You will be responsible for independently leading a team, mentoring team members, and driving all test deliverables across the project life cycle. You will be involved in end-to-end delivery of the project, from testing strategy and estimation through planning, execution, and reporting.

Work you’ll do
A Cloud Data Engineer will be responsible for the following activities:
Participate in application architecture and design discussions.
Work with team leads to define the solution design for development.
Analyze business/functional requirements and develop data processing pipelines for them.
Perform unit testing and participate in integration in collaboration with other team members.
Perform peer code reviews to ensure alignment with pre-defined architectural standards, guidelines, best practices, and quality standards.
Work on defects/bugs and help other team members.
Understand and comply with the established agile development methodology.
Participate in Agile ceremonies such as scrum meetings and sprint planning.
Proactively identify opportunities for code/process/design improvements.
Participate in customer support activities for existing clients using Converge Health’s existing platform/products.

The Team
Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com.

Qualifications and Experience Required:
Education: B.E./B.Tech/M.C.A./M.Sc.
Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and in handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks.
Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera.
AWS Cloud Platform: Working experience on the AWS cloud platform.
AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services such as Lambda, S3, Athena, Kinesis, etc.
Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc.
Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling.
Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with the Git version control system. Familiar with issue tracking tools like JIRA.
Agile Development: Familiar with Agile development methodologies.
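To make the AWS pipeline expectations above concrete, here is a minimal, illustrative Python sketch of one pipeline step: reading line-delimited JSON from S3, flattening it with pandas, and writing Parquet back to a curated prefix that an Athena table could query. The bucket names, keys, event shape, and partition column are hypothetical examples, not details from the posting.

```python
"""Illustrative S3 -> Parquet ETL step (assumes boto3, pandas, and pyarrow are installed)."""
import json
from datetime import datetime, timezone

import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context=None):
    # Hypothetical event shape: {"bucket": "example-data", "key": "raw/orders/2024-01-01.json"}
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    lines = obj["Body"].read().decode("utf-8").splitlines()
    records = [json.loads(line) for line in lines if line.strip()]

    # Flatten nested JSON into a tabular frame (semi-structured -> columnar).
    df = pd.json_normalize(records)
    df["ingest_date"] = datetime.now(timezone.utc).date().isoformat()

    # Write Parquet locally, then upload under a partitioned, Athena-friendly prefix.
    local_path = "/tmp/part-000.parquet"
    out_key = f"curated/orders/ingest_date={df['ingest_date'].iloc[0]}/part-000.parquet"
    df.to_parquet(local_path, index=False)
    s3.upload_file(local_path, event["bucket"], out_key)
    return {"rows": len(df), "output_key": out_key}
```

In practice a step like this would typically run as an AWS Lambda handler or a scheduled batch task feeding the curated layer.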
Generative AI Experience:
Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval.
Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data. Knowledge of vector search algorithms to enhance the performance of AI-driven applications.
Large Language Models (LLMs): Experience deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis.
LangChain: Knowledge of LangChain for building and managing complex data workflows. Experience building scalable data pipelines with LangChain to streamline data processing and integration tasks.
Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
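The RAG requirement above amounts to embedding a corpus, retrieving the most relevant passages for a query, and feeding them to a language model. A minimal sketch, assuming the sentence-transformers library for embeddings and a hypothetical call_llm() completion function; no specific vendor API or framework is implied:

```python
"""Illustrative RAG retrieval step: embed, retrieve top-k, assemble an augmented prompt."""
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

corpus = [
    "Patients on drug A showed a 12% reduction in LDL over six months.",
    "The ETL job loads claims data nightly into the curated S3 layer.",
    "Athena tables are partitioned by ingest_date to prune queries.",
]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity reduces to a dot product because the vectors are unit-normalized.
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # hypothetical LLM completion call, not a real API

if __name__ == "__main__":
    print(retrieve("How is the claims data loaded?"))
```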
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, is a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303074
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Summary Position Summary Job Role: Data Engineer – Gen AI Data Pipeline- Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you an exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for following activities: Participate in application architecture and design discussions. Work with team leads in defining solution Design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure its alignment with pre-defined architectural standards, guidelines, best practices, and meet quality standards. Work on defects\bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like – scrum meetings, sprint planning’s etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform\products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications And Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform:Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience:
Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval.
Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data. Knowledge of vector search algorithms to enhance the performance of AI-driven applications.
Large Language Models (LLMs): Experience deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis.
LangChain: Knowledge of LangChain for building and managing complex data workflows. Experience building scalable data pipelines with LangChain to streamline data processing and integration tasks.
Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
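As a companion to the vector-database requirement above, this sketch builds and queries a small in-memory FAISS index as a stand-in for a managed vector database. The documents and the embedding model are placeholders; FAISS is used here only as one illustrative vector-search library.

```python
"""Illustrative vector-index build-and-query step using FAISS (assumes faiss, numpy, sentence-transformers)."""
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Kinesis streams feed the near-real-time ingestion layer.",
    "The curated zone stores Parquet partitioned by ingest_date.",
    "RAG prompts are assembled from the top-k retrieved passages.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
vecs = model.encode(docs, normalize_embeddings=True).astype("float32")

# Inner product on unit-normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(vecs.shape[1])
index.add(vecs)

query = model.encode(["How is streaming data ingested?"],
                     normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```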
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Position Summary
Job Role: Data Engineer – Consultant
Offering customer-tailored services and deep industry insights, Deloitte Consulting LLP helps clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies, and stay ahead of customer demand. Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams deliver innovative cloud-based solutions across a range of domains and industries (e.g., supply chain management, banking/insurance, CPG, retail). It is a fast-paced, innovative, and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud platforms, commercial options, and open source. We build and bring to market solutions that we host and operate for our clients.

Data Engineer
As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance.

Work you’ll do
Design, build, and support scalable data pipelines, systems, and APIs using the Python, Spark, and Snowflake ecosystems.
Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based approaches, and other cutting-edge technologies to resolve identities at scale.
Lead cross-functional initiatives and collaborate with multiple, distributed teams.
Produce high-quality code that is robust, efficient, testable, and easy to maintain.
Deliver operational automation and tooling to minimize repeated manual tasks.
Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members.
Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact.

Team
Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enables financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html

Prior Experience:
5 to 8 years of experience in data engineering.
3 to 5 years of experience in data engineering.

Skills/Project Experience – Required:
2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake, or equivalent technologies.
Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.).
Knowledge and experience of working with large datasets.
Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.).
Experience with developing or consuming web interfaces (REST APIs).
Experience with modern software development practices, leveraging CI/CD and containerization such as Docker.
Self-driven, with a passion for learning and implementing new technologies.
A history of working collaboratively with a cross-functional team of engineers, data scientists, and product managers.

Good to Have
Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.).
Experience with or interest in implementing graph-based technologies.
Knowledge of or interest in data science & machine learning.
Experience with backend infrastructure and how to architect data pipelines.
Knowledge of system design and distributed systems.
Experience working in a product engineering environment.
Experience with data warehouses (BigQuery, Redshift, etc.).

Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, is a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 302301
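The Spark-centric responsibilities in this posting can be illustrated with a small PySpark batch job: read semi-structured JSON, compute a daily aggregate, and write partitioned Parquet. The paths, column names, and aggregation are hypothetical examples, not the product's actual pipeline.

```python
"""Illustrative PySpark batch step: JSON in, daily aggregate out as partitioned Parquet."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-aggregate").getOrCreate()

# Raw, semi-structured events (one JSON object per line) from a hypothetical landing zone.
orders = spark.read.json("s3a://example-raw/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "customer_id")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partitioned columnar output for downstream warehouse- or Athena-style querying.
(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-curated/daily_orders/")
)

spark.stop()
```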
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary Position Summary Job Role: Data Engineer – Consultant Offering cusomer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges enabling them to seize new growth opportunities, reduce costs, improe efficiencies and stay ahead of customer demand . Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model desi, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovate cloud-based solutions across a range of domains and industries ( e.g. supply chain management, banking/insurance, CPG, retail, etc.) . It is a fast-paced, innovative and exciting environment . Our teams are following an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source . We are building and bringing solutions to market which we are hosting and operating for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using python, spark and snowflake ecosystems. Use distributed computing frameworks (primarily PySpark , snowpark ), graph-based and other cutting-edge technologies to resolve identities at scale Lead cross-functional initiatives and collaborate with multiple, distributed teams Produce high quality code that is robust, efficient, testable and easy to maintain Deliver operational automation and tooling to minimize repeated manual tasks Participate in code reviews, architectural decisions, give actionable feedback, and mentor junior team members Influence product roadmap and help cross-functional teams to identify data opportunities to drive impact Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enable financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI goals on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in Data engineering . 3 to 5 years of experience in Data engineering . Skills/Project Experience - Required : 2+ years of software development or data engineering experience i n Python (preferred) , Spark (preferred) , snowpark , snowflake or equivalent technologies Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.) Knowledge and experience of working with large datasets Proven track record of working with c loud technologies ( GCP, Azure, AWS , etc. 
) Experience with developing or consuming web interfaces (REST API) Experience with modern software development practices, leveraging CI/CD and containerization such as Docker Self-driven with a passion for learning and implementing new technologies A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers Good to Have Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.) Experience with or interest in implementing graph-based technologies Knowledge of or interest in d ata s cience & m achine l earning Experience with b ackend infrastructure and how to architect data pipelines Knowledge of systemdesign and distributed systems Experience working in a p roduct e ngineering environment Experience with data warehouses ( BigQuery , Redshift etc. ) Location: Hyderabad/Bengaluru /Gurgaon /Kolkata/Pune Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301
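The requirement above for designing scalable pipelines with tools such as Airflow can be sketched as a simple three-task DAG. The DAG id, schedule, and stubbed callables are hypothetical; real tasks would call the actual extract, transform, and load logic.

```python
"""Illustrative Airflow 2.x DAG wiring an extract -> transform -> load sequence."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull raw files from the source system (stubbed for illustration).
    print("extracting raw data")

def transform(**context):
    # Clean and reshape the extracted data (stubbed for illustration).
    print("transforming data")

def load(**context):
    # Load curated output into the warehouse (stubbed for illustration).
    print("loading curated data")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```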
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary Position Summary Job Role: Data Engineer – Gen AI Data Pipeline- Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you an exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for following activities: Participate in application architecture and design discussions. Work with team leads in defining solution Design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure its alignment with pre-defined architectural standards, guidelines, best practices, and meet quality standards. Work on defects\bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like – scrum meetings, sprint planning’s etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform\products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications And Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform:Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience:
Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval.
Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data. Knowledge of vector search algorithms to enhance the performance of AI-driven applications.
Large Language Models (LLMs): Experience deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis.
LangChain: Knowledge of LangChain for building and managing complex data workflows. Experience building scalable data pipelines with LangChain to streamline data processing and integration tasks.
Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary Job Role: Data Engineer – Gen AI Data Pipeline- Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you an exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for following activities: Participate in application architecture and design discussions. Work with team leads in defining solution Design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure its alignment with pre-defined architectural standards, guidelines, best practices, and meet quality standards. Work on defects\bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like – scrum meetings, sprint planning’s etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform\products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications And Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform:Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience:
Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval.
Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data. Knowledge of vector search algorithms to enhance the performance of AI-driven applications.
Large Language Models (LLMs): Experience deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis.
LangChain: Knowledge of LangChain for building and managing complex data workflows. Experience building scalable data pipelines with LangChain to streamline data processing and integration tasks.
Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary Job Role: Data Engineer – Consultant Offering cusomer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges enabling them to seize new growth opportunities, reduce costs, improe efficiencies and stay ahead of customer demand . Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model desi, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovate cloud-based solutions across a range of domains and industries ( e.g. supply chain management, banking/insurance, CPG, retail, etc.) . It is a fast-paced, innovative and exciting environment . Our teams are following an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source . We are building and bringing solutions to market which we are hosting and operating for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using python, spark and snowflake ecosystems. Use distributed computing frameworks (primarily PySpark , snowpark ), graph-based and other cutting-edge technologies to resolve identities at scale Lead cross-functional initiatives and collaborate with multiple, distributed teams Produce high quality code that is robust, efficient, testable and easy to maintain Deliver operational automation and tooling to minimize repeated manual tasks Participate in code reviews, architectural decisions, give actionable feedback, and mentor junior team members Influence product roadmap and help cross-functional teams to identify data opportunities to drive impact Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enable financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI goals on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in Data engineering . 3 to 5 years of experience in Data engineering . Skills/Project Experience - Required : 2+ years of software development or data engineering experience i n Python (preferred) , Spark (preferred) , snowpark , snowflake or equivalent technologies Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.) Knowledge and experience of working with large datasets Proven track record of working with c loud technologies ( GCP, Azure, AWS , etc. 
) Experience with developing or consuming web interfaces (REST API) Experience with modern software development practices, leveraging CI/CD and containerization such as Docker Self-driven with a passion for learning and implementing new technologies A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers Good to Have Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.) Experience with or interest in implementing graph-based technologies Knowledge of or interest in d ata s cience & m achine l earning Experience with b ackend infrastructure and how to architect data pipelines Knowledge of systemdesign and distributed systems Experience working in a p roduct e ngineering environment Experience with data warehouses ( BigQuery , Redshift etc. ) Location: Hyderabad/Bengaluru /Gurgaon /Kolkata/Pune Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301
Posted 1 week ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Summary Position Summary Job Role: Data Engineer – Consultant Offering cusomer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges enabling them to seize new growth opportunities, reduce costs, improe efficiencies and stay ahead of customer demand . Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model desi, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovate cloud-based solutions across a range of domains and industries ( e.g. supply chain management, banking/insurance, CPG, retail, etc.) . It is a fast-paced, innovative and exciting environment . Our teams are following an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source . We are building and bringing solutions to market which we are hosting and operating for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using python, spark and snowflake ecosystems. Use distributed computing frameworks (primarily PySpark , snowpark ), graph-based and other cutting-edge technologies to resolve identities at scale Lead cross-functional initiatives and collaborate with multiple, distributed teams Produce high quality code that is robust, efficient, testable and easy to maintain Deliver operational automation and tooling to minimize repeated manual tasks Participate in code reviews, architectural decisions, give actionable feedback, and mentor junior team members Influence product roadmap and help cross-functional teams to identify data opportunities to drive impact Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enable financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI goals on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in Data engineering . 3 to 5 years of experience in Data engineering . Skills/Project Experience - Required : 2+ years of software development or data engineering experience i n Python (preferred) , Spark (preferred) , snowpark , snowflake or equivalent technologies Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.) Knowledge and experience of working with large datasets Proven track record of working with c loud technologies ( GCP, Azure, AWS , etc. 
Experience with developing or consuming web interfaces (REST APIs) Experience with modern software development practices, leveraging CI/CD and containerization such as Docker Self-driven with a passion for learning and implementing new technologies A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers Good to Have Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.) Experience with or interest in implementing graph-based technologies Knowledge of or interest in data science & machine learning Experience with backend infrastructure and how to architect data pipelines Knowledge of system design and distributed systems Experience working in a product engineering environment Experience with data warehouses (BigQuery, Redshift, etc.) Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301
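The Data Engineer – Consultant role above centres on building pipelines with the Python, Spark and Snowflake ecosystems. As a purely illustrative sketch (not part of the original posting), the snippet below shows the general shape of such a job: read semi-structured data from S3, apply basic transformations in PySpark, and append the result to a Snowflake table. The bucket, table, and connection values are placeholders, and the spark-snowflake connector option names should be verified against the connector version in use.

```python
# Minimal PySpark -> Snowflake pipeline sketch. Bucket, table and connection
# option values are placeholders; the connector format name and option names
# should be verified against the spark-snowflake connector version deployed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Ingest semi-structured JSON landed in S3 (path is illustrative).
orders = spark.read.json("s3://example-landing-zone/orders/2024-01-01/")

# Basic cleaning and derived columns, as a typical transform step.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Write to Snowflake; credentials would normally come from a secrets manager.
snowflake_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "RAW",
    "sfWarehouse": "LOAD_WH",
}

(
    cleaned.write
    .format("snowflake")          # short name provided by the spark-snowflake connector
    .options(**snowflake_options)
    .option("dbtable", "ORDERS_DAILY")
    .mode("append")
    .save()
)
```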
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Summary Position Summary Job Role: Data Engineer – Gen AI Data Pipeline- Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you an exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for following activities: Participate in application architecture and design discussions. Work with team leads in defining solution Design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure its alignment with pre-defined architectural standards, guidelines, best practices, and meet quality standards. Work on defects\bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like – scrum meetings, sprint planning’s etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform\products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications And Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform:Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience: Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval processes. Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data; knowledge of vector search algorithms to enhance the performance of AI-driven applications. Large Language Models (LLMs): Experience with deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis. LangChain: Knowledge of LangChain for building and managing complex data workflows; experience building scalable data pipelines with LangChain to streamline data processing and integration tasks. Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
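The Generative AI requirements above (RAG, vector databases, LLMs, LangChain) revolve around one core pattern: embed documents, retrieve the most relevant ones for a query, and pass them to a language model as context. The framework-agnostic sketch below illustrates only that retrieval step; the documents, the toy embed() function and the prompt format are invented stand-ins, and a production pipeline would typically use a real embedding model, a vector database, and an orchestration framework such as LangChain.

```python
# Framework-agnostic sketch of the retrieval step in a RAG pipeline.
# embed() is a stand-in for whatever embedding model or service is used;
# a production setup would replace the in-memory search with a vector
# database and hand the assembled prompt to an LLM client.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash words into a fixed-size vector."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Claims are loaded nightly from the adjudication system into the warehouse.",
    "Patient records are de-identified before they reach the analytics layer.",
    "The ETL pipeline writes curated tables to S3 and registers them in Athena.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "Where does curated claims data end up?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the chosen LLM
```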
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary Position Summary Job Role: Data Engineer – Gen AI Data Pipeline- Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you an exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for following activities: Participate in application architecture and design discussions. Work with team leads in defining solution Design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure its alignment with pre-defined architectural standards, guidelines, best practices, and meet quality standards. Work on defects\bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like – scrum meetings, sprint planning’s etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform\products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications And Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform:Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience: Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval processes. Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data; knowledge of vector search algorithms to enhance the performance of AI-driven applications. Large Language Models (LLMs): Experience with deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis. LangChain: Knowledge of LangChain for building and managing complex data workflows; experience building scalable data pipelines with LangChain to streamline data processing and integration tasks. Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary Position Summary Job Role: Data Engineer – Consultant Offering customer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies and stay ahead of customer demand. Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovative cloud-based solutions across a range of domains and industries (e.g. supply chain management, banking/insurance, CPG, retail, etc.). It is a fast-paced, innovative and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source. We are building and bringing solutions to market which we host and operate for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using the Python, Spark and Snowflake ecosystems. Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based and other cutting-edge technologies to resolve identities at scale Lead cross-functional initiatives and collaborate with multiple, distributed teams Produce high-quality code that is robust, efficient, testable and easy to maintain Deliver operational automation and tooling to minimize repeated manual tasks Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enables financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in data engineering. 3 to 5 years of experience in data engineering. Skills/Project Experience - Required: 2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake or equivalent technologies Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.) Knowledge and experience of working with large datasets Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.)
Experience with developing or consuming web interfaces (REST APIs) Experience with modern software development practices, leveraging CI/CD and containerization such as Docker Self-driven with a passion for learning and implementing new technologies A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers Good to Have Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.) Experience with or interest in implementing graph-based technologies Knowledge of or interest in data science & machine learning Experience with backend infrastructure and how to architect data pipelines Knowledge of system design and distributed systems Experience working in a product engineering environment Experience with data warehouses (BigQuery, Redshift, etc.) Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary Position Summary Job Role: Data Engineer – Consultant Offering customer-tailored services and deep industry insights, at Deloitte Consulting LLP we help clients tackle their most complex challenges, enabling them to seize new growth opportunities, reduce costs, improve efficiencies and stay ahead of customer demand. Developing and executing our clients’ strategic vision, we help them dramatically improve their business performance across a broad range of specialties – enterprise model design, global business services, outsourcing, real estate, and location strategy. Our Deloitte Innovations and Platforms teams are working on delivering innovative cloud-based solutions across a range of domains and industries (e.g. supply chain management, banking/insurance, CPG, retail, etc.). It is a fast-paced, innovative and exciting environment. Our teams follow an agile development approach and work with the latest technologies across a wide range of cloud technologies, commercial options and open source. We are building and bringing solutions to market which we host and operate for our clients. Data Engineer As a Data Engineer, you will be responsible for designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for performance. Work you’ll do Design, build and support scalable data pipelines, systems, and APIs using the Python, Spark and Snowflake ecosystems. Use distributed computing frameworks (primarily PySpark and Snowpark), graph-based and other cutting-edge technologies to resolve identities at scale Lead cross-functional initiatives and collaborate with multiple, distributed teams Produce high-quality code that is robust, efficient, testable and easy to maintain Deliver operational automation and tooling to minimize repeated manual tasks Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact Team Converge’s cloud-based suite of software solutions, combined with Deloitte’s integrated technology ecosystem, enables financial institutions to deliver the security, digital convenience, and personalization customers expect today. With regulatory experience in financial services, strategy, and implementation, we help our clients offer an exceptional customer experience, expand product offerings, acquire new customers, reduce customer acquisition cost, and deliver strong ROI on their technology investment. For more information visit: https://www2.deloitte.com/us/en/pages/consulting/solutions/converge/converge-prosperity.html Prior Experience: 5 to 8 years of experience in data engineering. 3 to 5 years of experience in data engineering. Skills/Project Experience - Required: 2+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Snowpark, Snowflake or equivalent technologies Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.) Knowledge and experience of working with large datasets Proven track record of working with cloud technologies (GCP, Azure, AWS, etc.)
Experience with developing or consuming web interfaces (REST APIs) Experience with modern software development practices, leveraging CI/CD and containerization such as Docker Self-driven with a passion for learning and implementing new technologies A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers Good to Have Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.) Experience with or interest in implementing graph-based technologies Knowledge of or interest in data science & machine learning Experience with backend infrastructure and how to architect data pipelines Knowledge of system design and distributed systems Experience working in a product engineering environment Experience with data warehouses (BigQuery, Redshift, etc.) Location: Hyderabad/Bengaluru/Gurgaon/Kolkata/Pune Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302301
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary Position Summary Job Role: Data Engineer – Gen AI Data Pipeline- Consultant Are you looking to work at a place that builds robust, high-quality software solutions? ‘Deloitte Consulting’ is the answer. As an Analyst/Consultant/Engineer at Deloitte Consulting, you will be responsible for quality assurance on large-scale complex software solutions at enterprise level. These applications are often high-volume mission critical systems that would provide you an exposure with end-to-end functional and domain knowledge. You will work with business, functional and technical teams on the project located across shores. You will be responsible to independently lead a team, mentor team members, and drive all test deliverables across project life cycle. You will be involved in end-to-end delivery of the project from testing strategy, estimations, planning, execution and reporting. Work you’ll do A Cloud Data Engineer will be responsible for following activities: Participate in application architecture and design discussions. Work with team leads in defining solution Design for development. Analyze business/functional requirements and develop data processing pipelines for them. Perform unit testing and participate in integration in collaboration with other team members. Perform peer code reviews and ensure its alignment with pre-defined architectural standards, guidelines, best practices, and meet quality standards. Work on defects\bugs and help other team members. Understand and comply with the established agile development methodology. Participate in various Agile ceremonies like – scrum meetings, sprint planning’s etc. Proactively identify opportunities for code/process/design improvements. Participate in customer support activities for existing clients using Converge Health’s existing platform\products. The Team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services offered by Deloitte Digital. Learn more about our Technology Consulting practice on www.deloitte.com. Qualifications And Experience Required: Education: B.E./B.Tech/M.C.A./M.Sc. Data Engineering Principles: Proficient in data warehousing concepts. Experienced with ETL (Extract, Transform, Load) processes. Skilled in SQL and handling data in JSON and other semi-structured formats. Hands-on experience with Python for data processing tasks. Big Data and Cloud Platforms: Experience with Big Data technologies on cloud platforms such as AWS or Cloudera. AWS Cloud Platform:Working experience on the AWS cloud platform. AWS Data Pipeline: Knowledgeable in building data pipelines on AWS using services like Lambda, S3, Athena, Kinesis, etc. Performance Tuning: Proficient in performance tuning on various RDBMS (Relational Database Management Systems) such as Oracle, SQL Server, Redshift, Impala, etc. Data Modeling Concepts: Good understanding of dimensional, relational, or hybrid data modeling. Continuous Integration Tools: Experience with CI tools such as Jenkins. Proficient with GIT version control systems. Familiar with issue tracking tools like JIRA. Agile Development: Familiar with Agile development methodologies. 
Generative AI Experience: Retrieval-Augmented Generation (RAG): Must have implemented RAG techniques to enhance data retrieval and improve the relevance of generated content, and integrated RAG models with existing data pipelines to optimize information retrieval processes. Vector Databases: Experience using vector databases for efficient storage and retrieval of high-dimensional data; knowledge of vector search algorithms to enhance the performance of AI-driven applications. Large Language Models (LLMs): Experience with deploying and fine-tuning large language models for various NLP tasks, and integrating LLMs into data processing workflows to automate and enhance data analysis. LangChain: Knowledge of LangChain for building and managing complex data workflows; experience building scalable data pipelines with LangChain to streamline data processing and integration tasks. Efficiency Improvements: Hands-on experience reducing data processing times by optimizing ETL workflows and leveraging cloud-native solutions. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303074
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Size Mid-Sized Experience Required 3 - 6 years Working Days 5 days/week Office Location Karnataka, Bengaluru Role & Responsibilities Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture. Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities. Develop data pipelines that make data available across platforms. Should be comfortable executing ETL (Extract, Transform and Load) processes, which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform. Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines. Work closely with DevOps and senior architects to come up with scalable system and model architectures for enabling real-time and batch services. Ideal Candidate 3+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs. Well versed in the concepts of data warehousing, data modelling and/or data analysis. 2+ years of experience building pipelines and performing ETL on Redshift using industry-standard best practices. Ability to troubleshoot and solve performance issues with data ingestion, data processing and query execution on Redshift. Good understanding of orchestration tools like Airflow. Strong Python and SQL coding skills. Strong experience with distributed systems like Spark. Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.). Solid hands-on experience with various data extraction techniques such as CDC or time/batch-based extraction and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction. Perks, Benefits and Work Culture Work with cutting-edge technologies on high-impact systems. Be part of a collaborative and technically driven team. Enjoy flexible work options and a culture that values learning. Competitive salary, benefits, and growth opportunities. Skills: ml,data,aws,sql,aws emr,aws glue,load,data warehousing,data analysis,aws dms,airflow,data engineering,data extraction,etl,debezium,spark,python,kafka connect,redshift,data pipeline,pipelines,aws mwaa,aws athena,data modeling,aws lambda
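The Hopscotch posting above combines Airflow orchestration with Redshift loading. The following sketch is a minimal, assumed example of how such a daily batch might be wired up as an Airflow DAG; the DAG id, bucket, table, and IAM role are placeholders, and the Redshift connection handling is deliberately omitted.

```python
# Minimal Airflow DAG sketch for a daily batch ETL into Redshift.
# Table, bucket and IAM role names are placeholders; a real load task would
# execute the COPY through an Airflow connection/hook rather than printing it.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # In a real pipeline this would pull from the source system (API, OLTP DB,
    # CDC stream) and land partitioned files in S3.
    print("Extracted source data to s3://example-bucket/orders/")

def load_into_redshift(**context):
    # Redshift COPY statement built for illustration only.
    copy_sql = """
        COPY analytics.orders
        FROM 's3://example-bucket/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
        FORMAT AS PARQUET;
    """
    print("Would execute:", copy_sql)

with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_into_redshift", python_callable=load_into_redshift)
    extract >> load
```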
Posted 1 week ago
5.0 years
10 - 30 Lacs
Mumbai Metropolitan Region
On-site
ETL DataStage Developer About The Opportunity A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads. Role & Responsibilities Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion. Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality. Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads. Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools. Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices. Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability. Skills & Qualifications Must-Have 5+ years hands-on IBM DataStage development in enterprise environments. Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation. Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices. Proficiency in Unix/Linux shell scripting for job scheduling and automation. Experience with version control and CI/CD tools (Git, Jenkins, etc.). Preferred Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery). Knowledge of DataStage Parallel Extender and big data connectors. Familiarity with Data Quality or MDM tools such as Informatica IDQ. Benefits & Culture Highlights Client-facing projects offering ownership, visibility, and rapid career growth. Continuous learning budget covering certifications in DataStage and cloud ETL technologies. Collaborative, high-performance culture with merit-based rewards and on-site amenities. Skills: db2,sql,performance tuning,unix/linux shell scripting,git,teradata,datastage,oracle,ibm datastage,etl,data warehousing,jenkins
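One of the requirements above is Unix/Linux scripting for job scheduling and automation around DataStage. The sketch below is a hedged illustration of that idea using a Python wrapper rather than shell: the project, job and parameter names are invented, and the dsjob flags shown (-run, -param, -wait, -jobstatus) should be confirmed against the IBM Information Server documentation for your release.

```python
# Hedged sketch of job-scheduling automation around DataStage's dsjob CLI.
# Project and job names are placeholders; verify the dsjob flags against
# the documentation for your Information Server version.
import subprocess
import sys

PROJECT = "EDW_PROJECT"        # placeholder project name
JOB = "load_customer_dim"      # placeholder job name

def run_datastage_job(business_date: str) -> int:
    """Trigger the job, wait for completion, and return the exit status."""
    cmd = [
        "dsjob", "-run",
        "-param", f"BUSINESS_DATE={business_date}",
        "-wait", "-jobstatus",   # block until the job finishes, return its status
        PROJECT, JOB,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    status = run_datastage_job("2024-01-01")
    if status != 0:
        # A scheduler (cron, Control-M, Airflow) would alert and retry here.
        print(f"{JOB} failed with status {status}", file=sys.stderr)
        sys.exit(1)
```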
Posted 1 week ago
5.0 years
10 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
ETL DataStage Developer About The Opportunity A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads. Role & Responsibilities Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion. Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality. Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads. Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools. Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices. Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability. Skills & Qualifications Must-Have 5+ years hands-on IBM DataStage development in enterprise environments. Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation. Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices. Proficiency in Unix/Linux shell scripting for job scheduling and automation. Experience with version control and CI/CD tools (Git, Jenkins, etc.). Preferred Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery). Knowledge of DataStage Parallel Extender and big data connectors. Familiarity with Data Quality or MDM tools such as Informatica IDQ. Benefits & Culture Highlights Client-facing projects offering ownership, visibility, and rapid career growth. Continuous learning budget covering certifications in DataStage and cloud ETL technologies. Collaborative, high-performance culture with merit-based rewards and on-site amenities. Skills: db2,sql,performance tuning,unix/linux shell scripting,git,teradata,datastage,oracle,ibm datastage,etl,data warehousing,jenkins
Posted 1 week ago
5.0 years
10 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Industry & Sector A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads. Role & Responsibilities Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms. Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale. Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds. Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira. Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements. Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals. Skills & Qualifications Must-Have 5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments. Hands-on proficiency in advanced SQL, joins, window functions and data profiling. Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools. Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift. Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD. Strong defect management and stakeholder communication skills. Preferred Knowledge of BI visualisation layer testing with Tableau or Power BI. Performance benchmarking for batch loads and CDC streams. ISTQB or equivalent testing certification. Benefits & Culture Highlights Work on high-impact data programmes for global brands using modern cloud technologies. Clear career ladder with sponsored certifications and internal hackathons. Collaborative, merit-driven culture that values innovation and continuous learning. Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams. Skills: sql,shell,informatica powercenter,defect tracking,stakeholder communication,agile,pytest,redshift,ssis,defect management,snowflake,selenium,talend,python,etl testing,test automation,spark,hadoop,data warehousing
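The test-engineering duties above centre on reusable SQL checks that compare source and target after a load. The pytest sketch below illustrates that idea with two simple reconciliation tests (row counts and an amount total); SQLite stands in for the real systems purely so the example is self-contained, and against Snowflake or Redshift the same queries would run through the appropriate connector.

```python
# Self-contained sketch of ETL reconciliation tests in pytest. SQLite is used
# only to keep the example runnable anywhere; table names and data are invented.
import sqlite3
import pytest

@pytest.fixture
def conn():
    con = sqlite3.connect(":memory:")
    con.executescript(
        """
        CREATE TABLE src_orders (order_id INTEGER, amount REAL);
        CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
        INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
        INSERT INTO tgt_orders SELECT * FROM src_orders;  -- simulate the load
        """
    )
    yield con
    con.close()

def scalar(con, sql):
    """Run a query that returns a single value and unwrap it."""
    return con.execute(sql).fetchone()[0]

def test_row_counts_match(conn):
    assert scalar(conn, "SELECT COUNT(*) FROM src_orders") == \
           scalar(conn, "SELECT COUNT(*) FROM tgt_orders")

def test_amount_totals_match(conn):
    # A simple completeness check; real suites add per-key and checksum diffs.
    assert scalar(conn, "SELECT ROUND(SUM(amount), 2) FROM src_orders") == \
           scalar(conn, "SELECT ROUND(SUM(amount), 2) FROM tgt_orders")
```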
Posted 1 week ago
5.0 years
10 - 30 Lacs
Bengaluru, Karnataka, India
On-site
ETL DataStage Developer About The Opportunity A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads. Role & Responsibilities Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion. Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality. Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads. Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools. Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices. Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability. Skills & Qualifications Must-Have 5+ years hands-on IBM DataStage development in enterprise environments. Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation. Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices. Proficiency in Unix/Linux shell scripting for job scheduling and automation. Experience with version control and CI/CD tools (Git, Jenkins, etc.). Preferred Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery). Knowledge of DataStage Parallel Extender and big data connectors. Familiarity with Data Quality or MDM tools such as Informatica IDQ. Benefits & Culture Highlights Client-facing projects offering ownership, visibility, and rapid career growth. Continuous learning budget covering certifications in DataStage and cloud ETL technologies. Collaborative, high-performance culture with merit-based rewards and on-site amenities. Skills: db2,sql,performance tuning,unix/linux shell scripting,git,teradata,datastage,oracle,ibm datastage,etl,data warehousing,jenkins
Posted 1 week ago
5.0 years
10 - 30 Lacs
Bengaluru, Karnataka, India
On-site
Industry & Sector A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads. Role & Responsibilities Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms. Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale. Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds. Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira. Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements. Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals. Skills & Qualifications Must-Have 5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments. Hands-on proficiency in advanced SQL, joins, window functions and data profiling. Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools. Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift. Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD. Strong defect management and stakeholder communication skills. Preferred Knowledge of BI visualisation layer testing with Tableau or Power BI. Performance benchmarking for batch loads and CDC streams. ISTQB or equivalent testing certification. Benefits & Culture Highlights Work on high-impact data programmes for global brands using modern cloud technologies. Clear career ladder with sponsored certifications and internal hackathons. Collaborative, merit-driven culture that values innovation and continuous learning. Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams. Skills: sql,shell,informatica powercenter,defect tracking,stakeholder communication,agile,pytest,redshift,ssis,defect management,snowflake,selenium,talend,python,etl testing,test automation,spark,hadoop,data warehousing
Posted 1 week ago
5.0 years
10 - 30 Lacs
Hyderabad, Telangana, India
On-site
ETL DataStage Developer
About The Opportunity
A rapidly-expanding information technology consulting provider in the data integration & analytics space partners with Fortune 500 enterprises to modernise legacy warehouses and architect scalable, real-time data platforms. Operating at the intersection of professional services and enterprise software implementation, we deliver high-throughput ETL solutions that turn raw operational data into actionable business intelligence. Based on-site at a flagship client location in India, you will engineer resilient IBM DataStage pipelines that power critical reporting and analytics workloads.
Role & Responsibilities
Design, develop, and optimise DataStage ETL jobs for batch and near-real-time data ingestion.
Analyse source-to-target mappings, create technical specs, and ensure end-to-end data lineage and quality.
Tune performance through partitioning, parallel processing, and efficient stage design to meet SLA-driven loads.
Collaborate with data architects, DBAs, and business analysts to integrate warehouses, marts, and downstream BI tools.
Implement unit testing, peer reviews, and migration scripts to UAT and production following DevOps best practices.
Monitor daily jobs, troubleshoot failures, and implement permanent fixes to guarantee high availability.
Skills & Qualifications
Must-Have
5+ years hands-on IBM DataStage development in enterprise environments.
Strong SQL expertise across Oracle, DB2, or Teradata with proven query optimisation.
Solid understanding of data warehousing concepts, dimensional modelling, and ETL best practices.
Proficiency in Unix/Linux shell scripting for job scheduling and automation.
Experience with version control and CI/CD tools (Git, Jenkins, etc.).
Preferred
Exposure to cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
Knowledge of DataStage Parallel Extender and big data connectors.
Familiarity with Data Quality or MDM tools such as Informatica IDQ.
Benefits & Culture Highlights
Client-facing projects offering ownership, visibility, and rapid career growth.
Continuous learning budget covering certifications in DataStage and cloud ETL technologies.
Collaborative, high-performance culture with merit-based rewards and on-site amenities.
Skills: db2, sql, performance tuning, unix/linux shell scripting, git, teradata, datastage, oracle, ibm datastage, etl, data warehousing, jenkins
Posted 1 week ago
5.0 years
10 - 30 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector
A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads.
Role & Responsibilities
Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms.
Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale.
Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds.
Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira.
Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements.
Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals.
Skills & Qualifications
Must-Have
5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments.
Hands-on proficiency in advanced SQL, joins, window functions and data profiling.
Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools.
Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift.
Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD.
Strong defect management and stakeholder communication skills.
Preferred
Knowledge of BI visualisation layer testing with Tableau or Power BI.
Performance benchmarking for batch loads and CDC streams.
ISTQB or equivalent testing certification.
Benefits & Culture Highlights
Work on high-impact data programmes for global brands using modern cloud technologies.
Clear career ladder with sponsored certifications and internal hackathons.
Collaborative, merit-driven culture that values innovation and continuous learning.
Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams.
Skills: sql, shell, informatica powercenter, defect tracking, stakeholder communication, agile, pytest, redshift, ssis, defect management, snowflake, selenium, talend, python, etl testing, test automation, spark, hadoop, data warehousing
Posted 1 week ago
5.0 years
10 - 30 Lacs
Mumbai Metropolitan Region
On-site
Industry & Sector
A fast-growing IT services and data analytics consultancy serving Fortune 500 clients across banking, retail and healthcare relies on high-quality data pipelines to fuel BI and AI initiatives. We are hiring an on-site ETL Test Engineer to ensure the reliability, accuracy and performance of these mission-critical workloads.
Role & Responsibilities
Develop and maintain end-to-end test plans for data extraction, transformation and loading processes across multiple databases and cloud platforms.
Create reusable SQL queries and automation scripts to validate data completeness, integrity and historical load accuracy at scale.
Set up data-comparison, reconciliation and performance tests within Azure DevOps or Jenkins CI pipelines for nightly builds.
Collaborate with data engineers to debug mapping issues, optimise jobs and drive defect resolution through Jira.
Document test artefacts, traceability matrices and sign-off reports to support regulatory and audit requirements.
Champion best practices for ETL quality, mentoring junior testers on automation frameworks and agile rituals.
Skills & Qualifications
Must-Have
5 years specialised in ETL testing within enterprise data warehouse or lakehouse environments.
Hands-on proficiency in advanced SQL, joins, window functions and data profiling.
Experience testing Informatica PowerCenter, SSIS, Talend or similar ETL tools.
Exposure to big data ecosystems such as Hadoop or Spark and cloud warehouses like Snowflake or Redshift.
Automation skills using Python or Shell with Selenium or pytest frameworks integrated into CI/CD.
Strong defect management and stakeholder communication skills.
Preferred
Knowledge of BI visualisation layer testing with Tableau or Power BI.
Performance benchmarking for batch loads and CDC streams.
ISTQB or equivalent testing certification.
Benefits & Culture Highlights
Work on high-impact data programmes for global brands using modern cloud technologies.
Clear career ladder with sponsored certifications and internal hackathons.
Collaborative, merit-driven culture that values innovation and continuous learning.
Location: On-site, India. Candidates must be willing to work from client premises and collaborate closely with cross-functional teams.
Skills: sql, shell, informatica powercenter, defect tracking, stakeholder communication, agile, pytest, redshift, ssis, defect management, snowflake, selenium, talend, python, etl testing, test automation, spark, hadoop, data warehousing
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us .
Position Summary
The Senior Manager, Senior Solution Engineer, Drug Development Information Technology (DDIT) will be part of the product team committed to bridging the gap between technology and business needs within the Clinical Data Ecosystem (CDE), which primarily delivers technology strategy and solutions for clinical trial execution, Global Biostatistics and Data Sciences, and Clinical Data Management (Clinical Analytics, Site Selection, Feasibility, Real World Evidence).
The role is based out of our Hyderabad office and is part of the Research and Development (R&D) BI&T data team that delivers data and analytics capabilities across Drug Development (DD). You will specifically support our Clinical Data Filing and Sharing (CDFS) product line. This is a critical role that supports systems essential to BMS' direct value chain: regulated analysis and reporting for every trial at BMS.
Desired Candidate Characteristics:
Have a strong commitment to a career in technology with a passion for healthcare
Ability to understand the needs of the business and commitment to deliver the best user experience and adoption
Able to collaborate across multiple teams
Demonstrated leadership experience
Excellent communication skills
Innovative and inquisitive nature to ask questions, offer bold ideas and challenge the status quo
Agility to learn new tools and processes
As the candidate grows in the role, they will receive additional training, with opportunities to expand responsibilities and gain exposure to additional areas within Drug Development. This includes working with Data Product Leads and providing input and innovation opportunities to modernize with cutting-edge technologies (Agentic AI, advanced automation, visualization and application development techniques).
Key Responsibilities
Architect and lead the evolution of the Statistical Computing Environment (SCE) platform to support modern clinical trial requirements.
Partner with data management and statistical programming leads to support seamless integration of data pipelines and metadata-driven standards.
Lead the development of automated workflows for clinical data ingestion, transformation, analysis, and reporting within the SCE.
Drive process automation and efficiency initiatives in data preparation and statistical programming workflows.
Develop and implement solutions to enhance system performance, stability and security.
Act as a subject matter expert for AWS SAS implementations and integration with clinical systems.
Lead the implementation of cloud-based infrastructure using AWS EC2, Auto Scaling, CloudWatch, and related AWS services.
Provide architectural guidance and oversight for CDISC SDTM/ADaM data standards and eCTD regulatory submissions.
Collaborate with cross-functional teams to identify product improvements and enhancements.
Administer the production environment and diagnose and resolve technical issues in a timely manner, documenting solutions for future reference.
Coordinate with vendors, suppliers, and contractors to ensure the timely delivery of products and services.
Serve as a technical mentor for development and operations teams supporting SCE solutions.
Analyze business challenges and identify areas for improvement through technology solutions.
Ensure regulatory and security compliance through proper governance and access controls.
Provide guidance to the resources supporting projects, enhancements, and operations.
Stay up to date with the latest technology trends and industry best practices.
Qualifications & Experience
Master's or bachelor's degree in computer science, information technology, or a related field preferred.
15+ years of experience in software development and engineering, clinical development, or the data science field.
8-10 years of hands-on experience implementing and operating different types of Statistical Computing Environments (SCE) within the Life Sciences and Healthcare business vertical.
Strong experience with SAS in an AWS-hosted environment including EC2, S3, IAM, Glue, Athena, and Lambda.
Hands-on development experience managing and delivering data solutions with AWS data, analytics, and AI technologies such as AWS Glue, Redshift, RDS (PostgreSQL), S3, Athena, Lambda, Databricks, and Business Intelligence and visualization tools.
Experience with R, Python, or other programming languages for data analysis or automation.
Experience in shell/Python scripting and Linux automation for operational monitoring and alerting across the environment.
Familiarity with cloud DevOps practices and infrastructure-as-code (e.g., CloudFormation, Terraform).
Expertise in SAS Grid architecture, grid node orchestration, and job lifecycle management.
Strong working knowledge of SASGSUB, job submission parameters, and performance tuning.
Understanding of submission readiness and Health Authority requirements for data traceability and transparency.
Excellent communication, collaboration and interpersonal skills to interact with diverse stakeholders.
Ability to work both independently and collaboratively in a team-oriented environment.
Comfortable working in a fast-paced environment with minimal oversight.
Prior experience working in an Agile-based environment.
If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.
Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.
On-site Protocol Responsibilities
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role:
Site-essential roles require 100% of shifts onsite at your assigned facility.
Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture.
For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.
BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.
BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.
BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.
If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/
Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
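To give a flavour of the operational monitoring and alerting expectations listed under Qualifications & Experience, here is a minimal sketch assuming boto3 and Amazon CloudWatch. The metric namespace, alarm name, and SNS topic ARN are hypothetical placeholders, not BMS conventions, and a production setup would typically be defined in infrastructure-as-code rather than ad hoc scripts.

# Minimal sketch: publish a custom metric for failed SAS batch jobs and keep
# a CloudWatch alarm on it. All names and the SNS ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def report_failed_jobs(failed_count: int) -> None:
    """Publish the number of failed grid jobs as a custom CloudWatch metric."""
    cloudwatch.put_metric_data(
        Namespace="SCE/BatchJobs",                     # hypothetical namespace
        MetricData=[{
            "MetricName": "FailedJobs",
            "Value": float(failed_count),
            "Unit": "Count",
        }],
    )

def ensure_alarm() -> None:
    """Create or update an alarm that notifies when any job fails in a 5-minute window."""
    cloudwatch.put_metric_alarm(
        AlarmName="sce-failed-jobs",                   # hypothetical alarm name
        Namespace="SCE/BatchJobs",
        MetricName="FailedJobs",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:sce-alerts"],  # placeholder ARN
    )

if __name__ == "__main__":
    ensure_alarm()
    report_failed_jobs(0)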
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.
Your Opportunity
Join our network as a Data Engineer and work directly with visionary startups to design, build, and optimize data pipelines and systems. You’ll help transform raw data into actionable insights, ensuring that data flows seamlessly across the organization to support informed decision-making. Enjoy the freedom to structure your engagement on an hourly or project basis—all while working remotely.
Role Overview
As a Data Engineer, you will:
Design & Develop Data Pipelines: Build and maintain scalable, robust data pipelines that power analytics and machine learning initiatives.
Optimize Data Infrastructure: Ensure data is processed efficiently, securely, and in a timely manner.
Collaborate & Innovate: Work closely with data scientists, analysts, and other stakeholders to streamline data ingestion, transformation, and storage.
What You’ll Do
Data Pipeline Development:
Design, develop, and maintain end-to-end data pipelines using modern data engineering tools and frameworks.
Automate data ingestion, transformation, and loading processes across various data sources.
Implement data quality and validation measures to ensure accuracy and reliability.
Infrastructure & Optimization:
Optimize data workflows for performance and scalability in cloud environments (AWS, GCP, or Azure).
Leverage tools such as Apache Spark, Kafka, or Airflow for data processing and orchestration.
Monitor and troubleshoot pipeline issues, ensuring smooth data operations.
Technical Requirements & Skills
Experience: 3+ years in data engineering or a related field.
Programming: Proficiency in Python, SQL, and familiarity with Scala or Java is a plus.
Data Platforms: Experience with big data technologies like Hadoop, Spark, or similar.
Cloud: Working knowledge of cloud-based data solutions (e.g., AWS Redshift, BigQuery, or Azure Data Lake).
ETL & Data Warehousing: Hands-on experience with ETL processes and data warehousing solutions.
Tools: Familiarity with data orchestration tools such as Apache Airflow or similar.
Database Systems: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases.
What We’re Looking For
A detail-oriented data engineer with a passion for building efficient, scalable data systems.
A proactive problem-solver who thrives in a fast-paced, dynamic environment.
A freelancer with excellent communication skills and the ability to collaborate with cross-functional teams.
Why Join Us?
Immediate Impact: Tackle challenging data problems that drive real business outcomes.
Remote & Flexible: Work from anywhere with engagements structured to suit your schedule.
Future Opportunities: Leverage BeGig’s platform to secure additional data-focused roles as your expertise grows.
Innovative Work: Collaborate with startups at the forefront of data innovation and technology.
Ready to Transform Data?
Apply now to become a key Data Engineer for our client and a valued member of the BeGig network!
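For a sense of the orchestration work described above, here is a minimal sketch of an extract-transform-load DAG, assuming Apache Airflow 2.x. The DAG id, schedule, and task bodies are illustrative placeholders rather than a prescribed design.

# Minimal Airflow 2.x sketch: three stub tasks chained extract >> transform >> load.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull raw events from the source API or bucket")   # stub

def transform(**context):
    print("clean, validate, and reshape the raw data")        # stub

def load(**context):
    print("write curated tables to the warehouse")            # stub

with DAG(
    dag_id="daily_events_pipeline",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load

In practice each stub would call out to Spark jobs, warehouse loaders, or data-quality checks, with retries and alerting configured on the DAG.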
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview
Domo is a cloud-native data experiences innovator that puts data to work for everyone. Underpinned by AI, data science, and a secure data foundation, our platform makes data actionable with user-friendly dashboards and apps. With Domo, companies get intuitive, agile data experiences that power exponential business impact.
Position Summary
Our Technical Support team is looking for problem solvers with executive presence and polish—highly versatile, reliable, self-starting individuals with deep technical troubleshooting skills and experience. You will help Domo clients facilitate their digital transformation and strategic initiatives and increase brand loyalty and referenceability through world-class technical support. When our customers succeed, we succeed.
The Technical Support team is staffed 24/7, which allows our global customers to contact us at their convenience. Support Team members build strong, lasting relationships with customers by understanding their needs and concerns. This team takes the lead in providing a world-class experience for every person who contacts Domo through our Support Team.
Key Responsibilities
Provide exceptional service by connecting, solving, and building relationships with our customers. Interactions may include case work over telephone, email, Zoom, in person, or other internal tools, as needed and determined by the business.
Think outside the box: our advisors are offered a high degree of latitude to find and develop solutions. Successful candidates will demonstrate independent thinking that consistently leads to robust and scalable solutions for our customers.
Perpetually expand your knowledge of Domo’s platform, Business Intelligence, data, and analytics through on-the-job training, time for side projects, and Domo certification.
Provide timely (per SLAs), constant, and ongoing communication with your peers and customers regarding their support cases until those cases are solved.
Job Requirements
Essential:
Bachelor's degree in a technical field (computer science, mathematics, statistics, analytics, etc.) or 3-5 years of related experience in a relevant field. Show us that you know how to learn, find answers, and develop solutions on your own.
At least 2 years of experience in a support role, ideally in a customer-facing environment.
Communicate clearly and effectively with customers to fully meet their needs. You will be working with experts in their field; quickly establishing rapport and trust with them is critical.
Strong SQL experience is a must: you should be able to explain from memory the basic purpose and syntax behind joins, unions, selects, grouping, aggregation, indexes, subqueries, etc. (a short worked example appears after this listing).
Software application support experience. Preference given to SaaS, analytics, data, and Business Intelligence fields.
Tell us about your experience working methodically through queues, following through on commitments, SOPs, and company policies, and maintaining professional communication etiquette in verbal and written correspondence.
Flexible and adaptable to rapid change. This is a fast-paced industry and there will always be something new to learn.
Desired:
APIs - REST/SOAP, endpoints, uses, authentication, methods, Postman
Programming languages - Python, JavaScript, Java, etc.
Relational databases - MySQL, PostgreSQL, MSSQL, Redshift, Oracle, ODBC, OLE DB, JDBC
Statistical computing - R, Jupyter
JSON/XML – reading, parsing, XPath, etc.
SSO/IDP – OpenID Connect, SAML, Okta, Azure AD, Ping Identity
Snowflake Data Cloud / ETL
LOCATION: Pune, Maharashtra, India
India Benefits & Perks
Medical cash allowance provided
Maternity and Paternity Leave policy
Baby bucks: cash allowance to spend on anything for every newborn or child adopted
Haute Mama: cash allowance to spend on Maternity Wardrobe (only for women employees)
18 days paid time off + 10 holidays + 12 medical leaves
Sodexo Meal Pass
Health and Wellness Benefit
One-time Technology Benefit towards the purchase of a tablet or smartwatch
Corporate National Pension Scheme
Employee Assistance Programme (EAP)
Domo is an equal opportunity employer.
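For candidates brushing up on the SQL fundamentals named in the Essential requirements, here is a small, self-contained illustration written in Python with the standard-library sqlite3 module; the table and column names are invented for the example. It shows a join followed by grouping and aggregation.

# Join orders to customers, then aggregate revenue per region (illustrative data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'APAC'), (2, 'EMEA');
    INSERT INTO orders VALUES (10, 1, 120.0), (11, 1, 80.0), (12, 2, 200.0);
""")

rows = conn.execute("""
    SELECT c.region, COUNT(o.id) AS order_count, SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()

for region, order_count, revenue in rows:
    print(f"{region}: {order_count} orders, {revenue:.2f} total")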
Posted 1 week ago