0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information: Company: Yubi. Date Opened: 07/10/2025. Job Type: Full time. Work Experience: 1-3 years. Industry: Technology. City: Bangalore. State/Province: Karnataka. Country: India. Zip/Postal Code: 560076.

About Us: Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, and from one product to a holistic suite of seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

Job Description: Data Engineer 2

Position Summary: As a Data Engineer, you will be part of a highly talented Data Engineering team responsible for developing reusable capabilities and tools to automate various types of data processing pipelines. You will contribute to different stages of data engineering, including data acquisition, ingestion, processing, pipeline monitoring, and data validation. Your work will be crucial in keeping the various data ingestion and processing pipelines running successfully and in ensuring that the data in the data lake is up to date, valid, and usable.

Technology Experience: 3+ years of experience in data engineering. Comfortable and hands-on with Python programming. Strong experience working with RDBMS and NoSQL systems. Strong experience with the AWS ecosystem, with hands-on exposure to components such as Airflow, EMR, Redshift, S3, Athena, and PySpark. Strong experience developing REST APIs in Python using frameworks like Flask and FastAPI. Prior experience with crawling libraries such as BeautifulSoup in Python is desirable. Proven ability to work with SQL, including writing complex queries to retrieve key metrics. Skilled in connecting to, exploring, and understanding upstream data. Experience with various data lake storage formats and the ability to choose among them based on the use case.

Responsibilities: Design and build scalable data pipelines that can handle large volumes of data. Develop ETL/ELT pipelines that extract data from upstream sources and sync it to the data lake in Parquet, Iceberg, or Delta formats. Optimize the data pipelines, keep them running successfully, and ensure business continuity. Collaborate with cross-functional teams to source all the data required for business use cases. Stay up to date with emerging data technologies and trends to ensure continuous improvement of our data infrastructure and architecture. Follow best practices in data querying and manipulation to ensure data integrity.
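The ETL/ELT responsibility described above (extracting from an upstream source and syncing it to the data lake as Parquet) could look roughly like the minimal PySpark sketch below. The JDBC URL, credentials, source table, partition column, and S3 bucket are all hypothetical placeholders, not details from the posting; in practice a job like this would typically be orchestrated from Airflow.

```python
# Minimal PySpark sketch: pull a table from an upstream RDBMS and land it in the
# data lake as partitioned Parquet. All connection details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("upstream_to_lake").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://upstream-db:5432/sales")   # hypothetical source
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "****")
    .load()
)

# Basic validation before the sync: fail fast if the extract is empty.
if orders.count() == 0:
    raise ValueError("Upstream extract returned no rows")

(
    orders.write.mode("overwrite")
    .partitionBy("order_date")                                   # assumed partition column
    .parquet("s3://my-data-lake/raw/orders/")                    # hypothetical bucket
)
```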
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. - Experience with data visualization using Tableau, QuickSight, or similar tools - Experience with data modeling, warehousing, and building ETL pipelines - Experience with statistical analysis packages such as R, SAS, and Matlab - Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling (a brief illustration follows at the end of this posting)

Our Customer Experience and Business Trends (CXBT) team is seeking a skilled and motivated Business Intelligence Engineer (BIE) to analyze and deliver insights to help us better serve customers. Our team within the CXBT organization is called Benchmarking, Economics, Analytics, and Measurement (BEAM). BEAM is a central team that consists of economics, analytics (business intelligence), and measurement science (data scientists). Our mission is to drive customer experience (CX) improvement through science modeling and quantitative data analytics. Our core functional skills include data collection, science modeling, insights reporting, and automation. The right candidate is passionate about understanding customer needs, perceptions, and experiences, diving deep into complex problems, and continuously striving to deliver deeper insights. The person in this role will innovate, build new methodologies to generate insights, and make recommendations to drive actions that directly impact our current and future customers. A successful candidate will possess excellent analytical skills and the ability to work collaboratively to influence business leaders at all levels, including senior management.

Key job responsibilities • Own, design, develop, document, and manage scalable solutions for new and ongoing analyses, metrics, reports, and dashboards to support business needs • Identify new data sources and invent new methodologies and approaches to understand and drive improved customer experiences • Drive efforts to simplify, automate, and standardize processes across the team to drive efficiencies, expand scope, and increase impact on customer experience • Articulate assumptions, methodologies, results, and implications • Present analyses to both technical and non-technical stakeholders, ensuring clarity and understanding

About the team Customer Experience and Business Trends (CXBT) is an organization made up of a diverse suite of functions dedicated to deeply understanding and improving customer experience, globally. We are a team of builders that develop products, services, ideas, and various ways of leveraging data to influence product and service offerings – for almost every business at Amazon – for every customer (e.g., consumers, developers, sellers/brands, employees, investors, streamers, gamers). Our approach is based on determining the customer need, along with problem solving, and we work backwards from there. We use technical and non-technical approaches and stay aware of industry and business trends. We are a global team, made up of a diverse set of profiles, skills, and backgrounds – including Product Managers, Computer Vision experts, Solutions Architects, Data Scientists, Business Intelligence Engineers, Business Analysts, Risk Managers, and more.

Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
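As a rough illustration of the SQL-plus-Python requirement in the posting above (not Amazon's actual stack), the sketch below pulls a daily metric with SQL and shapes it in pandas for modeling. The connection string, the sqlalchemy-redshift dialect, and the fact_orders table are assumptions made purely for illustration.

```python
# Illustrative only: pull a metric from a warehouse with SQL, then shape it in
# Python for modeling. Table names and the DSN are hypothetical; the Redshift
# dialect assumes the sqlalchemy-redshift package is installed.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("redshift+psycopg2://user:****@cluster:5439/dw")  # placeholder DSN

query = """
    SELECT order_date::date AS day,
           COUNT(*)          AS orders,
           SUM(order_amount) AS revenue
    FROM fact_orders
    WHERE order_date >= DATEADD(day, -90, CURRENT_DATE)
    GROUP BY 1
    ORDER BY 1
"""

daily = pd.read_sql(query, engine)
daily["revenue_7d_avg"] = daily["revenue"].rolling(7).mean()  # simple derived feature for modeling
print(daily.tail())
```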
Posted 1 week ago
10.0 - 17.0 years
0 Lacs
Hyderabad, Telangana
On-site
We have an exciting opportunity for an ETL Data Architect position with an AI/ML-driven SaaS solution product company in Hyderabad. As an ETL Data Architect, you will play a crucial role in designing and implementing a robust Data Access Layer that provides consistent data access over the underlying heterogeneous storage layer. You will also be responsible for developing and enforcing data governance policies to ensure data security, quality, and compliance across all systems.

In this role, you will lead the architecture and design of data solutions that leverage the latest tech stack and AWS cloud services. Collaboration with product managers, tech leads, and cross-functional teams will be essential to align data strategy with business objectives. Additionally, you will oversee data performance optimization, scalability, and reliability of data systems while guiding and mentoring team members on data architecture, design, and problem-solving.

The ideal candidate should have at least 10 years of experience in data-related roles, with a minimum of 5 years in a senior leadership position overseeing data architecture and infrastructure. A deep background in designing and implementing enterprise-level data infrastructure, preferably in a SaaS environment, is required. Extensive knowledge of data architecture principles, data governance frameworks, security protocols, and performance optimization techniques is essential. Hands-on experience with AWS services such as RDS, Redshift, S3, Glue, and DocumentDB, as well as other services like MongoDB, Snowflake, etc., is highly desirable. Familiarity with big data technologies (e.g., Hadoop, Spark) and modern data warehousing solutions is a plus. Proficiency in at least one programming language (e.g., Node.js, Java, Golang, Python) is a must. Excellent communication skills are crucial in this role, with the ability to translate complex technical concepts to non-technical stakeholders. Proven leadership experience, including team management and cross-functional collaboration, is also required. A Bachelor's degree in Computer Science, Information Systems, or a related field is necessary, with a Master's degree being preferred.

Preferred qualifications include experience with Generative AI and Large Language Models (LLMs) and their applications in data solutions, as well as familiarity with financial back-office operations and the FinTech domain. Stay updated on emerging trends in data technology, particularly in AI/ML applications for finance. Industry: IT Services and IT Consulting
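The Data Access Layer described above is an architectural idea rather than a specific library. As a toy sketch under invented names (not this company's design), one way to give callers a single interface over heterogeneous storage such as S3 Parquet and Redshift looks like this:

```python
# Toy sketch of a Data Access Layer hiding heterogeneous storage behind one
# interface. Class, method, and table names are invented for illustration.
from typing import Protocol
import pandas as pd


class DataSource(Protocol):
    def read(self, name: str) -> pd.DataFrame: ...


class S3ParquetSource:
    def __init__(self, bucket: str) -> None:
        self.bucket = bucket

    def read(self, name: str) -> pd.DataFrame:
        # Requires s3fs/pyarrow; the path layout is an assumption.
        return pd.read_parquet(f"s3://{self.bucket}/{name}.parquet")


class RedshiftSource:
    def __init__(self, engine) -> None:
        self.engine = engine

    def read(self, name: str) -> pd.DataFrame:
        return pd.read_sql(f"SELECT * FROM {name}", self.engine)


def load(source: DataSource, name: str) -> pd.DataFrame:
    """Callers depend only on the DataSource interface, not the storage engine."""
    return source.read(name)
```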
Posted 1 week ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Roles and Responsibilities: Build PySpark-based data ingestion pipelines for Adobe Experience Platform. Design Redshift schemas and optimize data loading. Create performant SQLs for reporting and validation. Support data model implementation and migration across sandboxes. Troubleshoot ingestion and configuration issues in AEP. Contribute to internal innovation and present complex use cases.

Qualifications: 4+ years of experience with PySpark and Redshift. Hands-on experience with AWS Glue, Lambda, Kafka, Athena. Proficiency in SQL performance tuning and analytics. Familiarity with REST APIs, data modeling, and ingestion tools. Experience with ETL tools like Talend, Informatica. Ability to work independently across multiple engagements. Strong documentation and communication capabilities. (ref:hirist.tech)
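For the Redshift schema design and data-loading duties above, a hedged sketch might look like the following. The table, distribution and sort keys, S3 path, IAM role, and cluster details are invented for illustration, and the load is shown with the AWS redshift_connector driver rather than any tooling named in the posting.

```python
# Hedged sketch of the Redshift side: a table tuned for the expected access
# pattern plus a COPY load from S3. Everything below is a placeholder example.
import redshift_connector  # AWS's Python driver for Redshift

DDL = """
CREATE TABLE IF NOT EXISTS web_events (
    event_id    VARCHAR(64),
    profile_id  VARCHAR(64),
    event_ts    TIMESTAMP,
    event_type  VARCHAR(32)
)
DISTKEY (profile_id)   -- co-locate a profile's events on one slice for joins
SORTKEY (event_ts);    -- lets time-range predicates skip blocks
"""

COPY = """
COPY web_events
FROM 's3://my-bucket/aep-exports/web_events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'
FORMAT AS PARQUET;
"""

conn = redshift_connector.connect(host="cluster.example", database="dw",
                                  user="loader", password="****")
cursor = conn.cursor()
cursor.execute(DDL)
cursor.execute(COPY)
conn.commit()
conn.close()
```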
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: As a Data Engineer, you will be part of a Data and Analytics (D&A) team responsible for building data pipelines that enable us to make informed decisions across the entire organization. This is a great opportunity to make a real impact on the course of the company, which makes data-based decisions as part of its Data + Analytics Strategy. The Data Engineer is responsible for the design, development, testing, and implementation of automated data pipelines for the Enterprise Data Warehouse, hosted in the cloud. The Data Engineer works closely with Business Intelligence / Analytics teams and business users to understand requirements, translate them into technical design, develop data pipelines, and implement solutions in the Enterprise Data Warehouse (Redshift).

Primary Responsibilities Include: Analyze existing and create new stored procedures that involve complex data models and business rules. Build data pipelines utilizing ETL transformation tools such as Informatica or AWS Glue. Actively participate through all phases of the project cycle, from ideation to post-implementation stabilization. Work with business and technical peers to define how best to meet requirements, balancing speed and robustness. Build high-quality, maintainable SQL OLAP/analytic functions following established patterns and coding practices (an illustrative sketch appears at the end of this posting). Analyze technical data to establish solutions that achieve complex data transformations. Participate in and perform testing to ensure data quality and integrity via unit, integration, regression, and UAT testing. Create and maintain process design, data model, and operations documentation. Assist in the maintenance of the codebase, unit tests, and related technical design docs and configurations. Engage and collaborate with stakeholders via the Agile process, identifying and mitigating risks and issues as needed. Maintain software velocity and quality for deliverables, holding oneself accountable to commitments.

Job Requirements (Minimum Competencies Required for Job Performance): Experience in PL/SQL scripting and query optimization, required. Experience with AWS (Amazon Web Services) Redshift, Oracle, or PostgreSQL, preferred. Experience with Informatica PowerCenter and/or Informatica Cloud / IDMC, preferred. Experience in data model design, dimensional data modeling, and complex stored procedure development, required. Strong analytical skills, synthesizing information with attention to detail and accuracy to establish patterns and solutions. Experience with AWS, e.g., S3, PySpark, Glue, Redshift, Lambda, preferred. Experience with data lakehouse platforms, e.g., Databricks, Snowflake, preferred. Experience in scripting languages, e.g., Python, Scala, Java, Unix Shell, Bash, preferred. Experience operating in Agile and Waterfall development methodologies, preferred. Experience building data visualization solutions using BI platforms, e.g., Tableau, Power BI, Qlik, preferred. Capable of balancing technology ideals and business objectives, evaluating options and implications. Must possess strong written and verbal communication skills. Manages and prioritizes work effectively with minimal supervision, seeking and offering help as needed to achieve goals. Adaptable to change and able to work independently and as part of a team. Applies curiosity and creativity to solve problems, seeking opportunities and overcoming challenges with resourcefulness. High bias for action in meeting commitments and deadlines; effectively sees, communicates, and mitigates risks and issues.
Active participant in the development community; seeks and offers guidance, coaching, and professional development. (ref:hirist.tech)
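The "SQL OLAP/analytic functions" responsibility above usually means window-function patterns such as deduplication or running totals. Here is one hypothetical example kept as a query string; the table and column names are invented and not part of this posting.

```python
# Hypothetical window-function pattern: keep only the most recent address per
# customer. Table and column names are invented for illustration.
LATEST_ADDRESS_SQL = """
SELECT customer_id, address, updated_at
FROM (
    SELECT customer_id, address, updated_at,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY updated_at DESC) AS rn
    FROM customer_address_history
) ranked
WHERE rn = 1;
"""
```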
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary: Data Analytics Engineer – CL4

Role Overview: As a Data Analytics Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftsmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Key Responsibilities:

Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations.

Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support.

Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications and write high-quality, supportable, scalable code, ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams.

Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time.

Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.

Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation.

Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps, to deliver daily product deployments using full automation from code check-in to production with all quality checks through the SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of full-lifecycle product development, focusing on continuous improvement and learning.

Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team : US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes that leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications : A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. Strong data engineering foundation with deep understanding of data-structure, algorithms, code instrumentations, etc. 5+ years proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools), data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). 5+ years of experience with cloud-native engineering, using FaaS/PaaS/micro-services on cloud hyper-scalers like Azure, AWS, and GCP. Strong understanding of methodologies & tools like, XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. Strong preference will be given to candidates with experience in AI/ML and GenAI. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. 
This makes Deloitte one of the most rewarding places to work.

Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306372
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary: Data Analytics Engineer – CL3

Role Overview: As a Data Analytics Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftsmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Key Responsibilities:

Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations.

Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support.

Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications and write high-quality, supportable, scalable code, ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams.

Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time.

Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.

Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation.

Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps, to deliver daily product deployments using full automation from code check-in to production with all quality checks through the SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of full-lifecycle product development, focusing on continuous improvement and learning.

Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team : US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes that leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications : A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. Strong data engineering foundation with deep understanding of data-structure, algorithms, code instrumentations, etc. 3+ years proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools), data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). 3+ years of experience with cloud-native engineering, using FaaS/PaaS/micro-services on cloud hyper-scalers like Azure, AWS, and GCP. Strong understanding of methodologies & tools like XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. Strong preference will be given to candidates with experience in AI/ML and GenAI. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. 
This makes Deloitte one of the most rewarding places to work.

Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306373
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Engineer

Introduction to role: Are you ready to make a significant impact in the world of biopharmaceuticals? AstraZeneca, a global leader in innovation-driven prescription medicines, is seeking a dedicated Data Engineer to join our Commercial IT Data Analytics & AI (DAAI) team. With operations in over 100 countries and headquarters in the United Kingdom, AstraZeneca offers a unique workplace culture that fosters innovation and collaboration. As a Data Engineer, you will play a crucial role in supporting and enhancing our data platforms built on AWS services. Your expertise in ETL, Data Warehousing, Databricks, and AWS applications will be vital in ensuring business continuity and driving efficiency. Are you up for the challenge?

Accountabilities: Monitor and maintain the health and performance of production systems and applications. Provide timely incident response, troubleshooting, and resolution for technical issues raised by users or monitoring tools. Perform root cause analysis for recurring issues and implement preventive measures. Investigate data anomalies, troubleshoot failures, and coordinate with relevant teams for resolution. Collaborate with development and infrastructure teams to support deployments and configuration changes. Maintain and update technical documentation, standard operating procedures, and knowledge bases. Ensure adherence to service-level agreements (SLAs) and minimize downtime or service disruptions. Manage user access, permissions, and security-related requests as per organizational policies. Participate in on-call rotations and provide after-hours support as needed. Communicate effectively with collaborators, providing status updates and post-incident reports. Proactively find opportunities for automation and process improvement in support activities. Support data migration, upgrades, and transitions as required. Support business continuity and disaster recovery exercises as required.

Essential Skills/Experience: Education background: B.E/B.Tech/MCA/MSc/BSc. Overall years of experience: 3 to 5 years. Solid experience with SQL, data warehousing, and building ETL pipelines. Hands-on experience with AWS services, including EMR, EC2, S3, Athena, RDS, Databricks, and Redshift. Skilled in working with columnar databases such as Redshift, Cassandra, or BigQuery. Good understanding of ETL processes and data warehousing concepts. Familiarity with scheduling tools (especially Airflow) is a plus. Able to write complex SQL queries for data extraction, transformation, and reporting. Excellent communication skills and ability to work well with both technical and non-technical teams. Strong analytical and troubleshooting skills in complex data environments.

Desirable Skills/Experience: Experience with Databricks or Snowflake. Proficient in scripting and programming languages such as Shell Scripting and Python. Familiar with CI/CD using Bamboo. Proficient in version control systems, including Bitbucket and GitHub. Preferably experienced with release management processes. Significant prior experience in an IT environment within the pharmaceutical or healthcare industry.

At AstraZeneca, we are committed to driving exciting transformation on our journey to becoming a digital and data-led enterprise. Our work connects across the entire business to power each function, influencing patient outcomes and improving lives. By unleashing the power of our latest innovations in data, machine learning, and technology, we turn complex information into life-changing insights.
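Since the posting calls out Airflow familiarity and pipeline monitoring, here is a hedged sketch of the kind of scheduled, alert-on-failure job such a role might support. The DAG name, schedule, email address, and load step are placeholders; the real task would call into EMR, Databricks, or Redshift as described above.

```python
# Hedged Airflow 2.x sketch: a daily load with retries and email-on-failure.
# All identifiers below are illustrative assumptions, not the team's actual DAGs.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_daily_extract(**_):
    # Placeholder for the real EMR/Databricks/Redshift load step.
    print("loading daily extract")


with DAG(
    dag_id="daily_sales_load",
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 2 * * *",
    catchup=False,
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=10),
        "email_on_failure": True,                 # assumes SMTP is configured
        "email": ["data-ops@example.com"],
    },
) as dag:
    PythonOperator(task_id="load_daily_extract", python_callable=load_daily_extract)
```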
Join us to work alongside leading experts in our specialist communities, where your contributions are recognized from the top. Ready to take the next step? Apply now to become part of our dynamic team! Date Posted 09-Jul-2025 Closing Date 13-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 2 weeks ago
6.0 - 11.0 years
20 - 35 Lacs
Hyderabad
Hybrid
Greetings from AstroSoft Technologies! We are currently seeking a highly skilled and talented Senior AWS Data Engineer for our Hyderabad office.

AstroSoft Technologies (https://www.astrosofttech.com/) is a global award-winning leader in Data, Cloud, AI/ML, and Digital Innovation, founded in 2004, headquartered in FL, USA, with its India corporate office in Hyderabad. We are looking for highly skilled professionals with strong cloud engineering and data pipeline expertise to be part of our fast-growing IT team in Hyderabad. If you're passionate about delivering scalable, real-time data solutions and have a strong foundation in AWS and big data technologies, we want to hear from you. Apply here by email: karthik.jangam@astrosofttech.com

Role: Senior AWS Data Engineer. Location: Gachibowli, Hyderabad (Vasavi Sky City). Work Mode: Hybrid (work from office Tue to Thu; WFH Mon and Fri). Job Type: Full-Time. Shift: 12:30 PM to 9:30 PM IST. Experience Required: 7+ years.

Key Responsibilities: Design and develop scalable data pipelines using Kafka, Kinesis, Spark, and Flink. Strong experience in AWS services: S3, Glue, EMR, DMS, SNS, SQS, MWAA (Airflow). Proficiency in Python, Java, or Scala (Python preferred). Infrastructure automation with Terraform. Experience with ETL tools (ODI is a plus). Work with Oracle, Redshift, advanced SQL tuning, and physical DB optimization. Implement monitoring tools (CloudWatch, Splunk, Datadog) and SRE best practices. Collaborate with cross-functional teams to translate business needs into technical solutions.

Desired Candidate Profile: 7+ years total experience; 4+ years in AWS Data Engineering. Hands-on, solution-driven mindset with strong critical thinking. AWS Certification preferred. Excellent communication and stakeholder engagement skills. Immediate joiners preferred.

Why Join AstroSoft? H1B sponsorship (based on performance/project). Daily lunch and dinner provided. Group health insurance. Skill certifications and learning support. Competitive leave policy. Work in a collaborative, innovation-driven environment.

Thanks & Regards, Karthik Kumar, HR TAG Lead - India, AstroSoft Technologies, Unit 1810, Level 18, Vasavi Sky City, Gachibowli, Hyderabad, Telangana 500081. Contact: +91-8712229084. Email: karthik.jangam@astrosofttech.com. Winner, Telangana Best Employer Brand Award 2024.
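As a rough illustration of the Kafka/Spark streaming pipelines named in the responsibilities above, the following PySpark Structured Streaming sketch reads a topic and lands it in S3. The broker address, topic, paths, and the availability of the spark-sql-kafka connector package are all assumptions, not details from the posting.

```python
# Minimal Structured Streaming sketch for a Kafka-to-lake pattern. Requires the
# spark-sql-kafka connector on the cluster; all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
    .option("subscribe", "orders")                       # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://my-lake/streaming/orders/")
    .option("checkpointLocation", "s3://my-lake/checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```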
Posted 2 weeks ago
6.0 - 11.0 years
5 - 15 Lacs
Mumbai, Navi Mumbai
Work from Office
Job Title: SQL Developer. Location: Mumbai. Duration: Full-time.

Job Description: We are seeking an experienced SQL Developer with a minimum of 6 years in SQL development and a strong command of Amazon Redshift. The ideal candidate will be responsible for designing, developing, and optimizing complex SQL queries and data models to support our data warehousing and reporting infrastructure. This role is based at our Mumbai (Airoli) office and requires hands-on expertise in working with large datasets and cloud-based data warehouses.

Responsibilities: Design, develop, and maintain complex SQL queries, procedures, functions, and views. Optimize queries for performance across large datasets, particularly within Amazon Redshift. Work closely with data engineers, analysts, and business stakeholders to understand data needs and deliver reliable solutions. Create and maintain documentation related to database structures, data flows, and processes. Troubleshoot and resolve issues related to data accuracy, performance, and integration. Ensure data security, integrity, and compliance with organizational policies. Participate in code reviews and support continuous improvement of development practices.

Requirements: Minimum 6 years of experience in SQL development. Strong hands-on experience with Amazon Redshift (mandatory). Proficiency in writing and optimizing complex SQL queries. Experience with ETL processes and working with large-scale data environments. Familiarity with data modeling techniques and best practices. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Bachelor's degree in Computer Science, Information Systems, or a related field (preferred).
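To make the Redshift optimization duty concrete, here is a small, hypothetical illustration of the tuning loop: check the plan with EXPLAIN, then narrow the scan to the sort key and only the needed columns. Table, column, and cluster names are invented and not part of this posting.

```python
# Illustrative optimization loop for Redshift. The "slow" query scans the whole
# table; the rewrite projects only needed columns and filters on the assumed
# sort key (sale_date) so zone maps can skip blocks. All names are placeholders.
import redshift_connector

SLOW = "SELECT * FROM fact_sales WHERE customer_id = 42;"

FAST = """
SELECT sale_id, customer_id, sale_amount
FROM fact_sales
WHERE sale_date >= DATEADD(day, -30, CURRENT_DATE)
  AND customer_id = 42;
"""

conn = redshift_connector.connect(host="cluster.example", database="dw",
                                  user="dev", password="****")
cursor = conn.cursor()
cursor.execute("EXPLAIN " + FAST)
for row in cursor.fetchall():
    print(row[0])   # inspect the plan for range-restricted scans
conn.close()
```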
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. You will provide technical contributions to the data science process. In this role, you are the internally recognized expert in data, building infrastructure and data pipelines/retrieval mechanisms to support our data needs How You Will Contribute You will: Operationalize and automate activities for efficiency and timely production of data visuals Assist in providing accessibility, retrievability, security and protection of data in an ethical manner Search for ways to get new data sources and assess their accuracy Build and maintain the transports/data pipelines and retrieve applicable data sets for specific use cases Understand data and metadata to support consistency of information retrieval, combination, analysis, pattern recognition and interpretation Validate information from multiple sources. Assess issues that might prevent the organization from making maximum use of its information assets What You Will Bring A desire to drive your future and accelerate your career and the following experience and knowledge: Extensive experience in data engineering in a large, complex business with multiple systems such as SAP, internal and external data, etc. and experience setting up, testing and maintaining new systems Experience of a wide variety of languages and tools (e.g. script languages) to retrieve, merge and combine data Ability to simplify complex problems and communicate to a broad audience In This Role As a Senior Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices. Role & Responsibilities: Design and Build: Develop and implement scalable, secure, and cost-effective cloud-based data solutions. Manage Data Pipelines: Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes. Ensure Data Quality: Implement data quality and validation processes to ensure data accuracy and integrity. Optimize Data Storage: Ensure efficient data storage and retrieval for optimal performance. Collaborate and Innovate: Work closely with data teams, product owners, and stay updated with the latest cloud technologies and best practices. Technical Requirements: Programming: Python, PySpark, Go/Java Database: SQL, PL/SQL ETL & Integration: DBT, Databricks + DLT, AecorSoft, Talend, Informatica/Pentaho/Ab-Initio, Fivetran. Data Warehousing: SCD, Schema Types, Data Mart. Visualization: Databricks Notebook, PowerBI (Optional), Tableau (Optional), Looker. GCP Cloud Services: Big Query, GCS, Cloud Function, PubSub, Dataflow, DataProc, Dataplex. AWS Cloud Services: S3, Redshift, Lambda, Glue, CloudWatch, EMR, SNS, Kinesis. Azure Cloud Services: Azure Datalake Gen2, Azure Databricks, Azure Synapse Analytics, Azure Data Factory, Azure Stream Analytics. Supporting Technologies: Graph Database/Neo4j, Erwin, Collibra, Ataccama DQ, Kafka, Airflow. 
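The extract-transform-load responsibility above might look, in miniature, like the following pandas-to-BigQuery sketch. The CSV extract, project, dataset, table, and FX-rate column are illustrative assumptions only; the same pattern applies to the Redshift or Synapse targets named in the requirements.

```python
# Minimal ETL sketch: read an extract, derive a column, load to BigQuery.
# Requires google-cloud-bigquery and pyarrow; all names are placeholders.
import pandas as pd
from google.cloud import bigquery

raw = pd.read_csv("sap_orders_extract.csv")                   # hypothetical extract
raw["net_value_eur"] = raw["net_value"] * raw["eur_fx_rate"]  # simple transform

client = bigquery.Client(project="my-analytics-project")     # placeholder project
job = client.load_table_from_dataframe(
    raw,
    "my-analytics-project.sales.orders_daily",
    job_config=bigquery.LoadJobConfig(write_disposition="WRITE_APPEND"),
)
job.result()  # block until the load job finishes
```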
Soft Skills: Problem-Solving: The ability to identify and solve complex data-related challenges. Communication: Effective communication skills to collaborate with Product Owners, analysts, and stakeholders. Analytical Thinking: The capacity to analyze data and draw meaningful insights. Attention to Detail: Meticulousness in data preparation and pipeline development. Adaptability: The ability to stay updated with emerging technologies and trends in the data engineering field. Within Country Relocation support available and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy Business Unit Summary At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally including many household names such as Oreo , belVita and LU biscuits; Cadbury Dairy Milk , Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Job Type Regular Data Science Analytics & Data Science
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Your work days are brighter here.

At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask for you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday.

About The Team: The Enterprise Data & AI Technologies and Architecture (EDATA) organization is a dynamic and evolving team that is spearheading Workday’s growth through trusted data excellence, innovation, and architectural thought leadership. Equipped with an array of skills in data science, engineering, and analytics, this team orchestrates the flow of data across our growing company while ensuring data accessibility, accuracy, and security. With a relentless focus on innovation and efficiency, Workmates in EDATA enable the transformation of complex data sets into actionable insights that fuel strategic decisions and position Workday at the forefront of the technology industry. EDATA is a global team distributed across the U.S., India and Canada.

About The Role: Join a pioneering organization at the forefront of technological advancement, dedicated to leveraging data-driven insights to transform industries and drive innovation.
We are seeking a highly skilled and motivated Data Quality Engineer to join our dynamic team. The ideal candidate is someone who loves to learn, is detail oriented, has exceptional critical thinking and analytical skills. As a Data Quality Engineer, you will play a critical role in ensuring the accuracy, consistency, and completeness of our data across the enterprise data platform. You will be responsible for designing, developing, and implementing data quality processes, standards, and best practices across various data sources and systems to identify, resolve data issues. This role offers an exciting opportunity to learn, collaborate with cross-functional teams, including data engineers, data scientists, and business analysts, to drive data quality improvements and enhance decision-making capabilities. Responsibilities The incumbent will be responsible for (but not limited to) the following: Design and automate data quality checks; resolve issues and improve data pipelines with engineering and product teams. Collaborate with stakeholders to define data quality requirements and best practices. Develop test automation strategies and integrate checks into CI/CD pipelines. Monitor data quality metrics, identify root causes, and drive continuous improvements. Provide guidance on data quality standards across projects. Work with Data Ops to address production issues and document quality processes. About You Basic Qualifications 5+ years of experience as a Data Quality Engineer in data quality management or data governance. Good understanding of data management concepts, including data profiling, data cleansing, and data integration. Proficiency in SQL for data querying and manipulation. Develop and execute automated data quality tests using tools like SQL, Python (Pyspark), and data quality frameworks. Hands-on experience with cloud platforms (AWS/GCP), data warehouses (Snowflake, Databricks, Redshift), and integration tools (Snaplogic, dbt, Talend, etc.) Exposure to data quality tools (e.g., Acceldata, Tricentis) and CI/CD or DevOps practices is a plus. Experience with data quality monitoring tools (Acceldata, Tricentis) a plus. Other Qualifications Proven ability to prioritize and manage multiple tasks in a fast-paced environment. Certification in relevant technologies or data management disciplines is a plus. Analytical mindset with the ability to think strategically and make data-driven decisions. If you are a results-driven individual with a passion for data and analytics and a proven track record in data quality assurance, we invite you to apply for this exciting opportunity. Join our team and contribute to the success of our data-driven initiatives. Our Approach to Flexible Work With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. 
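One hedged way to picture the "automated data quality checks integrated into CI/CD" responsibility is a set of plain pytest assertions over a curated dataset. The dataset path, column names, and rules below are assumptions invented for illustration; in practice, frameworks or tools like those named in the posting would carry these checks.

```python
# Sketch of automated data quality checks written as pytest tests so they can
# run inside a CI/CD pipeline. Paths and column names are hypothetical; reading
# from s3:// requires s3fs.
import pandas as pd
import pytest


@pytest.fixture
def orders() -> pd.DataFrame:
    return pd.read_parquet("s3://curated/orders/latest/")   # placeholder location


def test_no_null_keys(orders):
    assert orders["order_id"].notna().all(), "order_id must never be null"


def test_keys_unique(orders):
    assert orders["order_id"].is_unique, "order_id must be unique"


def test_amounts_in_range(orders):
    assert (orders["order_amount"] >= 0).all(), "negative order amounts found"
```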
Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
Posted 2 weeks ago
0 years
4 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Data Engineer. In this role, we are looking for candidates with relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, and hands-on experience running machine learning tests and experiments and implementing appropriate ML algorithms.

Responsibilities: Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products. Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers. Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services. Build and implement machine learning models and prototype solutions for proof-of-concept (a small illustrative sketch follows at the end of this posting). Scale existing ML models into production on a variety of cloud platforms. Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams. Design and develop data pipelines: create efficient data pipelines to collect, process, and store large volumes of data from various sources. Implement data solutions: develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases. Ensure data quality: monitor and improve data quality by implementing validation processes and error handling. Collaborate with teams: work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. Optimize performance: continuously optimize data systems for performance, scalability, and cost-effectiveness. Experience in GenAI projects.

Qualifications we seek in you!
Minimum Qualifications / Skills
Bachelor's degree in computer science engineering, information technology, or BSc in Computer Science, Mathematics, or a similar field; Master’s degree is a plus
Integration – APIs, microservices, and ETL/ELT patterns
DevOps (good to have) – Ansible, Jenkins, ELK
Containerization – Docker, Kubernetes, etc.
Orchestration – Airflow, Step Functions, Control-M, etc.
Languages and scripting – Python, Scala, Java, etc.
Cloud services – AWS, GCP, Azure, and cloud native
Analytics and ML tooling – SageMaker, ML Studio
Execution paradigm – low latency/streaming and batch
Preferred Qualifications / Skills
Data platforms – Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
Visualization tools – Power BI, Tableau
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Senior Principal Consultant | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jul 8, 2025, 6:53:51 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
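Since the qualifications above call out Airflow for orchestration alongside Python, the following minimal Airflow DAG sketches the extract-transform-load pattern such a role typically automates. The DAG id, schedule, and placeholder callables are hypothetical stand-ins rather than any specific Genpact pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # pull data from an upstream source (API, database, file drop) -- placeholder
    print("extracting")


def transform():
    # clean/enrich the extracted data -- placeholder
    print("transforming")


def load():
    # write the result to the warehouse / data lake -- placeholder
    print("loading")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # linear dependency chain: extract -> transform -> load
    extract_task >> transform_task >> load_task
```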
Posted 2 weeks ago
0 years
4 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Lead Consultant - Data Engineer!
In this role, we are looking for candidates with relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and the ability to implement appropriate ML algorithms.
Responsibilities
Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products.
Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers.
Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services.
Build and implement machine learning models and prototype solutions for proof-of-concept.
Scale existing ML models into production on a variety of cloud platforms.
Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams.
Design and develop data pipelines: create efficient data pipelines to collect, process, and store large volumes of data from various sources.
Implement data solutions: develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases.
Ensure data quality: monitor and improve data quality by implementing validation processes and error handling.
Collaborate with teams: work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
Optimize performance: continuously optimize data systems for performance, scalability, and cost-effectiveness.
Experience in GenAI projects.
Qualifications we seek in you!
Minimum Qualifications / Skills
Bachelor's degree in computer science engineering, information technology, or BSc in Computer Science, Mathematics, or a similar field; Master’s degree is a plus
Integration – APIs, microservices, and ETL/ELT patterns
DevOps (good to have) – Ansible, Jenkins, ELK
Containerization – Docker, Kubernetes, etc.
Orchestration – Airflow, Step Functions, Control-M, etc.
Languages and scripting – Python, Scala, Java, etc.
Cloud services – AWS, GCP, Azure, and cloud native
Analytics and ML tooling – SageMaker, ML Studio
Execution paradigm – low latency/streaming and batch
Preferred Qualifications / Skills
Data platforms – Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
Visualization tools – Power BI, Tableau
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Lead Consultant | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jul 8, 2025, 6:56:06 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
Posted 2 weeks ago
5.0 years
15 - 25 Lacs
Hyderābād
On-site
Role - Data Engineer
Location - Hyderabad, INDIA [Hybrid]
Responsibilities:
● Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
● Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing (an illustrative CDC consumer is sketched after this posting)
● Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
● Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
● Design and implement data warehouse solutions that support analytical needs and machine learning applications
● Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
● Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
● Optimize query performance across various database systems through indexing, partitioning, and query refactoring
● Develop and maintain documentation for data models, pipelines, and processes
● Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
● Stay current with emerging technologies and best practices in data engineering
Requirements:
● 5+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure; work experience with enterprise SaaS is mandatory
● Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
● Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
● Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
● Experience with data warehousing concepts and technologies
● Solid understanding of data modeling principles and best practices for both operational and analytical systems
● Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
● Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
● Proficiency in at least one programming language (Python, Node.js, Java)
● Experience with version control systems (Git) and CI/CD pipelines
● Bachelor's degree in Computer Science, Engineering, or a related field
Preferred Qualifications:
● Experience with graph databases (Neo4j, Amazon Neptune)
● Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
● Experience working with streaming data technologies and real-time data processing
● Familiarity with data governance and data security best practices
● Experience with containerization technologies (Docker, Kubernetes)
● Understanding of financial back-office operations and the FinTech domain
● Experience working in a high-growth startup environment
● Master's degree in Computer Science, Data Engineering, or a related field
Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Schedule: Day shift, Monday to Friday
Work Location: In person
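Given the Debezium/Kafka change-data-capture stack named above, here is a hedged sketch of a consumer that reads Debezium change events from a Kafka topic and upserts the "after" image into Postgres. The topic, table, columns, and connection details are hypothetical, the event shape assumes the default JSON converter with schemas enabled (record under "payload"), and a production pipeline would add batching, delete handling, and error handling.

```python
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",  # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
    auto_offset_reset="earliest",
)

conn = psycopg2.connect("dbname=analytics user=etl password=secret host=localhost")
conn.autocommit = True

UPSERT = """
    INSERT INTO customers (id, email)
    VALUES (%s, %s)
    ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email
"""

for message in consumer:
    event = message.value
    if not event:
        continue  # tombstone record
    after = event.get("payload", {}).get("after")  # Debezium "after" row image
    if after:
        with conn.cursor() as cur:
            cur.execute(UPSERT, (after["id"], after["email"]))
```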
Posted 2 weeks ago
5.0 years
5 - 18 Lacs
Hyderābād
On-site
Job Title: AWS Data Engineer – IoT Industry
Location: Hyderabad
Employment Type: Full-time
Experience Level: Mid / Senior
Department: Data Engineering / IoT Solutions
Job Summary: We are seeking a skilled AWS Data Engineer to join our team focused on IoT data solutions. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and architectures in the AWS cloud ecosystem. Your work will directly support real-time and batch processing of data collected from IoT devices and sensors across multiple environments.
Key Responsibilities:
Design and implement scalable ETL/ELT pipelines for ingesting, processing, and storing data from IoT devices.
Build and maintain data lakes and data warehouses on AWS using services such as S3, Glue, Redshift, Athena, and Lake Formation.
Work with real-time streaming data using services like Kinesis Data Streams, Kinesis Data Analytics, and Kafka.
Optimize data storage and access patterns for both structured and unstructured data.
Implement data quality, governance, and monitoring best practices.
Collaborate with Data Scientists, IoT Engineers, and DevOps teams to ensure reliable data delivery and infrastructure automation.
Ensure data security and compliance with industry standards.
Design Step Functions state machines (or MWAA) that ingest and process data from IoT telemetry, manufacturing ERP/MES dumps, field-service CRM exports, and finance CSVs/SaaS APIs.
Trigger via EventBridge, manage retries, and alert through SNS/Slack (a minimal trigger sketch follows this posting).
IoT Core MQTT rules → Aurora Postgres for telemetry.
AWS Glue jobs / AppFlow / Lambda to pull data from SAP, Oracle, Salesforce / ServiceNow, S3 uploads, etc.
dbt + Python in CodeBuild to cleanse, dedupe, enrich, and unify data into Raw → Silver → Gold layers.
All pipelines deployed with CodePipeline + CDK and monitored in CloudWatch.
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5–12 years of experience in data engineering, preferably in IoT or industrial domains.
Proficient in AWS cloud services, including but not limited to S3, Glue, Lambda, Redshift, Athena, DynamoDB, and Kinesis.
Strong programming skills in Python and Scala.
Experience with SQL and NoSQL databases.
Familiarity with data modeling, data warehousing, and big data tools (e.g., Apache Spark, Hive).
Ingestion:
1. AWS IoT Core (MQTT)
2. AWS Glue Jobs / Workflows for ERP/CSV
3. AWS AppFlow or custom Lambda for SaaS (Salesforce, ServiceNow)
Job Type: Full-time
Pay: ₹501,241.62 - ₹1,841,015.39 per year
Work Location: In person
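As a minimal sketch of the EventBridge-triggered orchestration mentioned above, the following Lambda handler starts a Step Functions execution with the triggering event as input. The state machine ARN environment variable and the payload shape are assumptions for illustration, not a description of the actual workflow.

```python
import json
import os

import boto3

sfn = boto3.client("stepfunctions")


def handler(event, context):
    # EventBridge passes the triggering event through; forward it as the execution input
    response = sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # e.g. the IoT/ERP ingestion workflow
        input=json.dumps(event),
    )
    # Return the execution ARN so callers/logs can correlate the run
    return {"executionArn": response["executionArn"]}
```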
Posted 2 weeks ago
1.0 years
4 - 10 Lacs
Hyderābād
Remote
- Experience defining requirements and using data and metrics to draw business insights - Knowledge of SQL - Knowledge of data visualization tools such as Quick Sight, Tableau, Power BI or other BI packages - Knowledge of Python, VBA, Macros, Selenium scripts - 1+ year of experience working in Analytics / Business Intelligence environment with prior experience of design and execution of analytical projects Want to join the Earth’s most customer centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge Status Quo? Do you strive to excel at goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced dynamic environment. Are you somebody that likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms, to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions with multiple sites. We are looking for an entrepreneurial and analytical program manager, who is passionate about their work, understands how to manage service levels across multiple skills/programs, and who is willing to move fast and experiment often. Key job responsibilities - Design and develop highly available dashboards and metrics using SQL and Excel/Tableau - Execute high priority (i.e. cross functional, high impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers - Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area - Creates and maintains comprehensive business documentation including user stories, acceptance criteria, and process flows that help the BIE understand the context for developing ETL processes and visualization solutions. - Performs user acceptance testing and business validation of delivered dashboards and reports, ensuring that BIE-created solutions meet actual operational needs and can be effectively utilized by site managers and operations teams. - Monitors business performance metrics and operational KPIs to proactively identify emerging analytical requirements, working with BIEs to rapidly develop solutions that address real-time operational challenges in the dynamic AI-enhanced fulfillment environment. About the team The Global Operations – Artificial Intelligence (GO-AI) team remotely handles exceptions in the Amazon Robotic Fulfillment Centers Globally. GO-AI seeks to complement automated vision based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. 
This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It operates multiple programs, including Nike IDS, Proteus, Sparrow, and other new initiatives in partnership with global technology and operations teams.
Experience in using AI tools
Experience in Amazon Redshift and other AWS technologies for large datasets
Analytical mindset and ability to see the big picture and influence others
Detail-oriented, with an aptitude for solving unstructured problems. The role will require the ability to extract data from various sources and to design, construct, and execute complex analyses that produce the data and reports needed to solve the business problem
Good oral, written, and presentation skills, combined with the ability to take part in group discussions and explain complex solutions
Ability to apply analytical, computer, statistical, and quantitative problem-solving skills
Ability to work effectively in a multi-task, high-volume environment
Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
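As an illustration of the SQL-plus-scripting work the qualifications above describe, here is a hedged sketch that pulls a weekly operational metric from a Redshift table and writes it out for a dashboard refresh. The cluster endpoint, credentials, table, and columns are all hypothetical; Redshift speaks the Postgres wire protocol, so psycopg2 is used here.

```python
import csv

import psycopg2

# Hypothetical Redshift connection (Redshift is Postgres-protocol compatible)
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="bi_user",
    password="***",
)

# Weekly volume and handle-time metric per site (table and columns are hypothetical)
sql = """
    SELECT site,
           DATE_TRUNC('week', decision_ts) AS week,
           COUNT(*)            AS decisions,
           AVG(handle_seconds) AS avg_handle_seconds
    FROM ops_decisions
    GROUP BY site, DATE_TRUNC('week', decision_ts)
    ORDER BY week, site
"""

with conn.cursor() as cur:
    cur.execute(sql)
    columns = [desc[0] for desc in cur.description]
    rows = cur.fetchall()

# Export for a Tableau/QuickSight extract or an Excel-based report
with open("weekly_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)
```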
Posted 2 weeks ago
0 years
8 - 8 Lacs
Chennai
On-site
Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.
Design and implement ETL workflows using AWS Glue, Python, and PySpark (a job skeleton is sketched after this posting).
Develop and optimize queries using Amazon Athena and Redshift.
Build scalable data pipelines to ingest, transform, and load data from various sources.
Ensure data quality, integrity, and security across AWS services.
Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
Automate data workflows and integrate with CI/CD pipelines.
Required Skills & Qualifications:
Hands-on experience with AWS Glue, Athena, and Redshift.
Strong programming skills in Python and PySpark.
Experience with ETL design, implementation, and optimization.
Familiarity with S3, Lambda, CloudWatch, and other AWS services.
Understanding of data warehousing concepts and performance tuning in Redshift.
Experience with schema design, partitioning, and query optimization in Athena.
Proficiency in version control (Git) and agile development practices.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
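The Glue/PySpark ETL work described above typically looks like the following job skeleton: read a table from the Glue Data Catalog, apply a light column mapping, and write partitioned Parquet back to S3 for Athena or Redshift Spectrum to query. The database, table, column names, and output path are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (populated by a crawler) -- names are hypothetical
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Light transformation: keep/rename columns and cast the amount to a double
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write Parquet to S3 so Athena / Redshift Spectrum can query the curated layer
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```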
Posted 2 weeks ago
4.0 - 6.0 years
2 - 8 Lacs
Noida
On-site
Position: Data Engineer (AWS QuickSight, Glue, PySpark) (Noida) (CE46SF RM 3386)
Education Required: Bachelor's or master's in computer science, Statistics, Mathematics, Data Science, or Engineering; AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer)
Must have skills:
Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies
Strong experience with PySpark
Expertise in SQL and data modeling for relational and non-relational databases
Familiarity with business intelligence and visualization tools, especially Amazon QuickSight
Good to have:
Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch)
Understanding of MLOps and model deployment best practices
Hands-on experience with AWS services for ML
Experience or familiarity with the HVAC domain is a plus
Key Responsibilities:
Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources
Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval
Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting
Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights (an illustrative Athena query feeding such a dataset is sketched after this posting)
Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions
Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance
Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools
Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed
Required Skills:
Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies
Strong experience with PySpark for large-scale data processing and transformation
Expertise in SQL and data modeling for relational and non-relational databases
Experience building and optimizing ETL pipelines and data integration workflows
Familiarity with business intelligence and visualization tools, especially Amazon QuickSight
Knowledge of data governance, security, and compliance best practices
Strong programming skills in Python; experience with automation and scripting
Ability to work collaboratively in agile environments and manage multiple priorities effectively
Excellent problem-solving and communication skills
Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Noida
Experience: 4-6 years
Notice period: 0-15 days
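Because the stack above pairs Athena with QuickSight, here is a hedged boto3 sketch of running an Athena query whose result set could back a QuickSight dataset. The database, table, columns, and S3 output location are hypothetical (the meter-reading example simply echoes the HVAC/energy domain mentioned above), and a production workflow would usually drive this from Glue/Airflow/Step Functions rather than a polling loop.

```python
import time

import boto3

athena = boto3.client("athena")

# Hypothetical rollup over a curated table
query = (
    "SELECT site_id, AVG(energy_kwh) AS avg_kwh "
    "FROM curated.meter_readings GROUP BY site_id"
)

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (fine for small ad-hoc queries)
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])  # first rows, including the header row
```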
Posted 2 weeks ago
8.0 - 12.0 years
3 - 5 Lacs
Noida
On-site
Posted On: 8 Jul 2025
Location: Noida, UP, India
Company: Iris Software
Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software.
About Iris Software
At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version.
Job Description
General Roles & Responsibilities:
Technical Leadership: Demonstrate leadership and the ability to guide business and technology teams in the adoption of best practices and standards.
Design & Development: Design, develop, and maintain a robust, scalable, and high-performance data estate.
Architecture: Architect and design robust data solutions that meet business requirements, including scalability, performance, and security.
Quality: Ensure the quality of deliverables through rigorous reviews and adherence to standards.
Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement.
Collaboration: Work closely with system architects, data engineers, data scientists, data analysts, cloud engineers, and other business stakeholders to determine the optimal, future-proof solution and architecture.
Innovation: Stay updated with the latest industry trends and technologies, and drive continuous improvement initiatives within the development team.
Documentation: Create and maintain technical documentation, including design documents and architectural user guides.
Technical Responsibilities:
Optimize data pipelines for performance and efficiency.
Work with Databricks clusters and configuration management tools.
Use appropriate tools in cloud data lake development and deployment.
Develop and implement cloud infrastructure to support current and future business needs.
Provide technical expertise and ownership in the diagnosis and resolution of issues.
Ensure all cloud solutions exhibit a high level of cost efficiency, performance, security, scalability, and reliability.
Manage cloud data lake development and deployment on AWS/Databricks.
Manage and create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions in Databricks.
Required Technical Skills:
Experience and proficiency with the Databricks platform – Delta Lake storage, Spark (PySpark, Spark SQL).
Must be well versed in the Databricks Lakehouse and Unity Catalog concepts and their implementation in enterprise environments.
Familiarity with the medallion architecture data design pattern for organizing data in a Lakehouse (a minimal bronze-to-silver sketch follows this posting).
Experience and proficiency with AWS data services – S3, Glue, Athena, Redshift, etc. – and Airflow scheduling.
Proficiency in SQL and experience with relational databases.
Proficiency in at least one programming language (e.g., Python, Java) for data processing and scripting.
Experience with DevOps practices – AWS DevOps for CI/CD, Terraform/CDK for infrastructure as code.
Good understanding of data principles and cloud data lake design and development, including data ingestion, data modeling, and data distribution.
Jira: Proficient in using Jira for managing projects and tracking progress.
Other Skills:
Strong communication and interpersonal skills.
Collaborate with data stewards, data owners, and IT teams for effective implementation.
Understanding of business processes and terminology – preferably logistics.
Experienced with Scrum and Agile methodologies.
Qualification
Bachelor’s degree in information technology or a related field. Equivalent experience may be considered. Overall experience of 8-12 years in Data Engineering.
Mandatory Competencies
Cloud - AWS | Data Science - Databricks | Database - SQL | Data on Cloud - Azure Data Lake (ADL) | Agile - Agile | Data Analysis - Data Analysis | Big Data - PySpark | Data on Cloud - AWS S3 | Data on Cloud - Redshift | ETL - AWS Glue | Python - Python | DevOps - CI/CD | Beh - Communication and collaboration
Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
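As a minimal illustration of the medallion (bronze to silver) pattern called out above, here is a short Databricks PySpark/Delta sketch. The Unity Catalog three-level names (logistics.bronze.shipment_events, etc.) and the columns are hypothetical.

```python
from pyspark.sql import functions as F

# `spark` is provided automatically in Databricks notebooks and jobs
bronze = spark.read.table("logistics.bronze.shipment_events")

silver = (
    bronze
    .dropDuplicates(["event_id"])                      # dedupe on the business key
    .filter(F.col("event_ts").isNotNull())             # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))   # derive a partition-friendly column
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("logistics.silver.shipment_events")
)
```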
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
AWS Data Engineer
Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time
About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.
Key Responsibilities:
Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
Optimize pipeline performance and ensure reliability and fault tolerance
Collaborate with cross-functional teams including data scientists and analysts
Perform data transformations using Python, Pandas, and SQL (a small pandas example follows this posting)
Maintain data integrity, quality, and security across the platform
Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions
Required Skills and Qualifications:
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
3+ years of experience in data engineering using AWS
Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
Solid understanding of data modeling, warehousing, and pipeline orchestration
Experience with version control (Git) and infrastructure as code (Terraform)
Preferred Skills:
Experience working with energy sector data or IoT/sensor-based data
Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
Familiarity with big data technologies like Apache Spark, Kafka
Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew
AWS Certifications (Data Analytics, Solutions Architect)
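As a small example of the Python/Pandas transformation work listed above, the sketch below reads raw sensor readings from S3, cleans them, and writes an hourly rollup as Parquet. The bucket, file names, and columns are hypothetical; reading and writing s3:// paths with pandas assumes s3fs and a Parquet engine such as pyarrow are installed.

```python
import pandas as pd

# Hypothetical raw sensor/IoT extract landed in S3
raw = pd.read_csv(
    "s3://example-bucket/raw/sensor_readings.csv",
    parse_dates=["timestamp"],
)

clean = (
    raw.dropna(subset=["sensor_id", "value"])               # drop incomplete readings
       .assign(value=lambda d: d["value"].clip(lower=0))    # sensor values cannot be negative
       .sort_values("timestamp")
)

# Hourly average per sensor -- a typical time-series rollup for downstream analytics
hourly = (
    clean.set_index("timestamp")
         .groupby("sensor_id")["value"]
         .resample("1h")
         .mean()
         .reset_index()
)

hourly.to_parquet(
    "s3://example-bucket/curated/sensor_readings_hourly.parquet",
    index=False,
)
```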
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Location: Any UST Location
Experience: 6 to 9 years
Mandatory Skills: PySpark, GCP, Hadoop, Hive, AWS
Good to Have: CI/CD and DevOps experience
Job Description
We are seeking a highly skilled Senior Big Data Engineer to join our team at UST. The ideal candidate will have solid experience in Big Data technologies, cloud platforms, and data processing frameworks, with a strong focus on PySpark and Google Cloud Platform (GCP).
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL workflows using PySpark, Hadoop, and Hive (a short Hive-enabled PySpark sketch follows this posting).
Deploy and manage big data workloads on cloud platforms like GCP and AWS.
Work closely with cross-functional teams to understand data requirements and deliver high-quality solutions.
Optimize data processing jobs for performance and cost-efficiency on cloud infrastructure.
Implement automation and CI/CD pipelines to streamline deployment and monitoring of data workflows.
Ensure data security, governance, and compliance in cloud environments.
Troubleshoot and resolve data issues, monitoring job executions and system health.
Mandatory Skills
PySpark: Strong experience in developing data processing jobs and ETL pipelines.
Google Cloud Platform (GCP): Hands-on experience with BigQuery, Dataflow, Dataproc, or similar services.
Hadoop Ecosystem: Expertise with Hadoop, Hive, and related big data tools.
AWS: Familiarity with AWS data services like S3, EMR, Glue, or Redshift.
Strong SQL and data modeling skills.
Good To Have
Experience with CI/CD tools and DevOps practices (Jenkins, GitLab, Terraform, etc.).
Containerization and orchestration knowledge (Docker, Kubernetes).
Experience with Infrastructure as Code (IaC).
Knowledge of data governance and data security best practices.
Skills: Spark, Hadoop, Hive, GCP
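As a brief illustration of the PySpark-on-Hive work described above, here is a sketch that enables Hive support, aggregates a Hive table, and writes the result back as a managed table. The database, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-shipment-aggregate")
    .enableHiveSupport()   # read/write tables registered in the Hive metastore
    .getOrCreate()
)

events = spark.table("warehouse.shipment_events")

# Daily per-route counts and average transit time
daily = (
    events.groupBy("route_id", F.to_date("event_ts").alias("event_date"))
          .agg(
              F.count("*").alias("events"),
              F.avg("transit_hours").alias("avg_transit_hours"),
          )
)

daily.write.mode("overwrite").saveAsTable("warehouse.shipment_events_daily")
```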
Posted 2 weeks ago
3.0 - 8.0 years
5 - 14 Lacs
Hyderabad
Work from Office
Position: Data Analyst | Interview: Walk-in | Type: Full-time | Location: Hyderabad | Exp: 3–8 yrs | Work: 5 Days WFO
Data Analysis & Insights | Reporting & Visualization | Data Extraction & ETL | Collaboration & Management
Required Candidate profile: Looking for Data Analysts with 3–8 yrs experience in SQL, BI tools (Tableau/Power BI), and Python/AppScript. Should have experience in ETL, dashboarding, and A/B testing.
Contact: 6309124068 (Manoj)
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
Zimetrics is a technology services and solutions provider specializing in Data, AI, and Digital. We help enterprises leverage the economic potential and business value of data from systems, machines, connected devices, and human-generated content. Our core principles are Integrity, Intellect, and Ingenuity, guiding our value system, engineering expertise, and organizational behavior. We are problem solvers and innovators who challenge conventional wisdom and believe in possibilities.
Key Responsibilities:
Design scalable and secure cloud-based data architecture solutions
Lead data modeling, integration, and migration strategies across platforms
Engage directly with clients to understand business needs and translate them into technical solutions
Support sales/pre-sales teams with solution architecture, technical presentations, and proposals
Collaborate with cross-functional teams including engineering, BI, and product
Ensure best practices in data governance, security, and performance optimization
Key Requirements:
Strong experience with Cloud platforms (AWS, Azure, or GCP)
Deep understanding of Data Warehousing concepts and tools (Snowflake, Redshift, BigQuery, etc.)
Proven expertise in data modeling (conceptual, logical, and physical)
Excellent communication and client engagement skills
Experience in pre-sales or solution consulting is a strong advantage
Ability to present complex technical concepts to non-technical stakeholders