6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Are you excited by the prospect of wrangling data, helping develop information systems, sources, and tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

About the Role
In this role as a Senior Data Engineer, you will:
- Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
- Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting.
- Build pipelines that source, transform, and load both structured and unstructured data, keeping in mind data security and access controls.
- Explore large volumes of data with curiosity and conviction.
- Contribute to the strategy and architecture of data management systems and solutions.
- Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
- Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
- Bring the curiosity to interrogate data, conduct independent research, apply various techniques, and tackle ambiguous problems.

Shift Timings: 12 PM to 9 PM (IST)
Work from office 2 days a week (mandatory)

About You
You're a fit for the role of Senior Data Engineer if your background includes:
- At least 6-7 years of total work experience, with at least 3 years in data engineering or analytics domains.
- A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
- SQL proficiency (a must).
- Experience with data pipeline and transformation tools such as dbt, Glue, Fivetran, Alteryx, or similar solutions.
- Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
- Experience with orchestration tools like Airflow or Dagster.
- Preferred: experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
- Data modelling knowledge of schemas such as snowflake and star.
- Experience building data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
- Knowledge of building ETL workflows, database design, and query optimization.
- Experience with a scripting language such as Python.
- Works well within a team and collaborates with colleagues across domains and geographies.
- Excellent oral, written, and visual communication skills.
- A demonstrable ability to assimilate new information thoroughly and quickly.
- A strong logical and scientific approach to problem-solving.
- The ability to articulate complex results in a simple and concise manner to all levels within the organization.

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions.
Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry-Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news.
We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender (including pregnancy), gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more about how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
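The posting above asks for data modelling knowledge of star and snowflake schemas. As a purely hypothetical illustration of what star-schema modelling involves, the sketch below splits a flat, denormalized "orders" extract into a fact table and a product dimension with surrogate keys; every table, column, and value here is invented for the example and is not drawn from the posting:

```python
# Hypothetical star-schema sketch: split a flat orders extract into a
# fact table plus a product dimension. All names are invented.

def build_star_schema(flat_rows):
    """Return (fact_rows, product_dim) from denormalized order rows."""
    product_dim = {}  # natural key -> surrogate key + attributes
    fact_rows = []
    for row in flat_rows:
        key = row["product_code"]
        if key not in product_dim:
            product_dim[key] = {
                "product_sk": len(product_dim) + 1,  # surrogate key
                "product_code": key,
                "product_name": row["product_name"],
            }
        fact_rows.append({
            "order_id": row["order_id"],
            "product_sk": product_dim[key]["product_sk"],  # FK to dimension
            "amount": row["amount"],
        })
    return fact_rows, list(product_dim.values())

flat = [
    {"order_id": 1, "product_code": "A1", "product_name": "Widget", "amount": 10.0},
    {"order_id": 2, "product_code": "A1", "product_name": "Widget", "amount": 12.5},
    {"order_id": 3, "product_code": "B2", "product_name": "Gadget", "amount": 7.0},
]
facts, products = build_star_schema(flat)
print(len(facts), len(products))  # 3 fact rows, 2 dimension rows
```

The same split done at warehouse scale (dbt models, Glue jobs) is what the requirement is really about: facts stay narrow and append-only, while descriptive attributes live once in the dimension.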
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Are you excited by the prospect of wrangling data, helping develop information systems, sources, and tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

About the Role
In this role as a Data Engineer, you will:
- Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
- Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting.
- Build pipelines that source, transform, and load both structured and unstructured data, keeping in mind data security and access controls.
- Explore large volumes of data with curiosity and conviction.
- Contribute to the strategy and architecture of data management systems and solutions.
- Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
- Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
- Bring the curiosity to interrogate data, conduct independent research, apply various techniques, and tackle ambiguous problems.

Shift Timings: 12 PM to 9 PM (IST)
Work from office 2 days a week (mandatory)

About You
You're a fit for the role of Data Engineer if your background includes:
- At least 4-6 years of total work experience, with at least 2 years in data engineering or analytics domains.
- A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
- SQL proficiency (a must).
- Experience with data pipeline and transformation tools such as dbt, Glue, Fivetran, Alteryx, or similar solutions.
- Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
- Experience with orchestration tools like Airflow or Dagster.
- Preferred: experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
- Data modelling knowledge of schemas such as snowflake and star.
- Experience building data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
- Knowledge of building ETL workflows, database design, and query optimization.
- Experience with a scripting language such as Python.
- Works well within a team and collaborates with colleagues across domains and geographies.
- Excellent oral, written, and visual communication skills.
- A demonstrable ability to assimilate new information thoroughly and quickly.
- A strong logical and scientific approach to problem-solving.
- The ability to articulate complex results in a simple and concise manner to all levels within the organization.

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry-Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments.
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender (including pregnancy), gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more about how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: Candidates with 15+ years of hands-on experience.

Preferred Skills
- Any cloud data and reporting migration experience.
- An analytical, data-driven approach to building a deep understanding of fast-changing businesses.
- Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
- Experience leading large-scale data modernization and governance initiatives, emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
- Experience in designing, architecting, implementing, and managing data lakes/warehouses.
- Experience delivering application migrations to cloud platforms in complex environments.
- Understanding of Agile, Scrum, and Continuous Delivery methodologies.
- Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
- Strong experience in data management with an understanding of analytics and reporting.
- Understanding of emerging technologies and the latest data engineering providers.
- Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services, and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Summary
As a Senior Data Engineer based out of our BMS Hyderabad office, you are part of the Data Platform team, and also support the larger Data Engineering community that delivers data and analytics capabilities across different IT functional domains. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.

Key Responsibilities
The Data Engineer will be responsible for designing, building, and maintaining ETL pipelines and data products, evolving those data products, and applying the most suitable data architecture for our organization's data needs. You will:
- Deliver high-quality data products and analytics-ready data solutions.
- Work with an end-to-end ownership mindset; innovate and drive initiatives through completion.
- Develop and maintain data models to support our reporting and analysis needs.
- Optimize data storage and retrieval to ensure efficient performance and scalability.
- Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements.
- Ensure data quality and integrity through data validation and testing.
- Implement and maintain security protocols to protect sensitive data.
- Stay up to date with emerging trends and technologies in data engineering and analytics.
- Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt the data and technology strategy.
- Serve as the Subject Matter Expert on Data & Analytics Solutions.
- Stay knowledgeable about evolving trends in data platforms and product-based implementation.
- Maintain an end-to-end ownership mindset in driving initiatives through completion.
- Work comfortably in a fast-paced environment with minimal oversight.
- Mentor other team members effectively to unlock their full potential.
- Bring prior experience working in an Agile/product-based environment.

Qualifications & Experience
- 7+ years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
- Breadth of experience in technology capabilities that span the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
- In-depth knowledge of and hands-on experience with AWS Glue and the AWS data engineering ecosystem.
- Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.); Cloudera Data Platform and Tableau experience is a plus.
- 5+ years of experience in data engineering or software development.
- Ability to create and maintain optimal data pipeline architecture and assemble large, complex data sets that meet functional and non-functional business requirements.
- Ability to identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Strong programming skills in languages such as Python, R, and Scala, including libraries such as PyTorch, PySpark, and Pandas.
- Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
- Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Functional knowledge of, or prior experience in, the life sciences research and development domain is a plus.
- Experience and expertise in establishing agile, product-oriented teams that work effectively with teams in the US and other global BMS sites.
- Initiates challenging opportunities that build strong capabilities for self and team.
- Demonstrates a focus on improving processes, structures, and knowledge within the team.
- Leads in analyzing current states, delivers strong recommendations for navigating complexity in the environment, and executes to bring complex solutions to completion.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary.
Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion, and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based, and remote-by-design jobs. The occupancy type you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility; for these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients, or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments, and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities.
As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.

If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
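Among the responsibilities this posting lists is ensuring data quality and integrity through data validation and testing. As a hypothetical sketch of that kind of pre-load gate, the function below screens ingested records and quarantines failures with reasons instead of loading them; the field names and rules are invented for the example and do not come from the posting:

```python
# Hypothetical pre-load data-quality gate: records that fail any rule
# are quarantined with reasons instead of being loaded. Field names
# and rules are invented for the example.

def validate_records(records):
    """Split records into (valid, rejected_with_reasons)."""
    valid, rejected = [], []
    for rec in records:
        reasons = []
        if not rec.get("id"):
            reasons.append("missing id")
        if rec.get("amount") is not None and rec["amount"] < 0:
            reasons.append("negative amount")
        if reasons:
            rejected.append({**rec, "reasons": reasons})
        else:
            valid.append(rec)
    return valid, rejected

batch = [
    {"id": "r1", "amount": 20.0},
    {"id": "", "amount": 5.0},
    {"id": "r3", "amount": -1.0},
]
good, bad = validate_records(batch)
print(len(good), len(bad))  # 1 valid, 2 quarantined
```

Keeping the rejects, rather than silently dropping them, is what makes the check auditable: the quarantine table tells you what failed and why.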
Posted 1 week ago
15.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: Candidates with 15+ years of hands-on experience.

Preferred Skills
- Any cloud data and reporting migration experience.
- An analytical, data-driven approach to building a deep understanding of fast-changing businesses.
- Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
- Experience leading large-scale data modernization and governance initiatives, emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
- Experience in designing, architecting, implementing, and managing data lakes/warehouses.
- Experience delivering application migrations to cloud platforms in complex environments.
- Understanding of Agile, Scrum, and Continuous Delivery methodologies.
- Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
- Strong experience in data management with an understanding of analytics and reporting.
- Understanding of emerging technologies and the latest data engineering providers.
- Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
15.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: Candidates with 15+ years of hands-on experience.

Preferred Skills
- Experience with GCP and at least one other cloud platform (AWS/Azure), specifically in data migration.
- An analytical, data-driven approach to building a deep understanding of fast-changing businesses.
- Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
- Experience leading large-scale data modernization and governance initiatives, emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
- Experience in designing, architecting, implementing, and managing data lakes/warehouses.
- Experience delivering application migrations to cloud platforms in complex environments.
- Understanding of Agile, Scrum, and Continuous Delivery methodologies.
- Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
- Strong experience in data management with an understanding of analytics and reporting.
- Understanding of emerging technologies and the latest data engineering providers.
- Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: Candidates with 15+ years of hands-on experience.

Preferred Skills
- Experience with GCP and at least one other cloud platform (AWS/Azure), specifically in data migration.
- An analytical, data-driven approach to building a deep understanding of fast-changing businesses.
- Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
- Experience leading large-scale data modernization and governance initiatives, emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
- Experience in designing, architecting, implementing, and managing data lakes/warehouses.
- Experience delivering application migrations to cloud platforms in complex environments.
- Understanding of Agile, Scrum, and Continuous Delivery methodologies.
- Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
- Strong experience in data management with an understanding of analytics and reporting.
- Understanding of emerging technologies and the latest data engineering providers.
- Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: Candidates with 15+ years of hands-on experience.

Preferred Skills
- Any cloud data and reporting migration experience.
- An analytical, data-driven approach to building a deep understanding of fast-changing businesses.
- Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
- Experience leading large-scale data modernization and governance initiatives, emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
- Experience in designing, architecting, implementing, and managing data lakes/warehouses.
- Experience delivering application migrations to cloud platforms in complex environments.
- Understanding of Agile, Scrum, and Continuous Delivery methodologies.
- Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
- Strong experience in data management with an understanding of analytics and reporting.
- Understanding of emerging technologies and the latest data engineering providers.
- Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.
Summary
As a Data Engineer based out of our BMS Hyderabad site, you are part of the Data Platform team, supporting the larger Data Engineering community that delivers data and analytics capabilities for Data Platforms and the Data Engineering Community. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.
Key Responsibilities
The Data Engineer will be responsible for developing and maintaining ETL/ELT pipelines for ingesting data from various sources into our data warehouse. Work with an end-to-end ownership mindset; innovate and drive initiatives through completion.
- Optimize data storage and retrieval to ensure efficient performance and scalability
- Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements
- Ensure data quality and integrity through data validation and testing
- Implement and maintain security protocols to protect sensitive data
- Stay up-to-date with emerging trends and technologies in data engineering and analytics
- Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to enable adoption of the data and technology strategy
- Knowledgeable in evolving trends in data platforms and product-based implementation
- Comfortable working in a fast-paced environment with minimal oversight
- Prior experience working in an Agile/product-based environment
Qualifications & Experience
- 2-3 years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in an AWS cloud environment
- Breadth of experience in technology capabilities spanning the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML
- In-depth experience with the AWS Glue service and the data engineering ecosystem on AWS
- Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.); experience with Cloudera Data Platform and Tableau is a plus
- Ability to create and maintain optimal data pipeline architecture and assemble large, complex data sets that meet functional and non-functional business requirements
- Ability to identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Strong programming skills in languages and libraries such as Python, PySpark, Pandas, and Scala
- Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
- Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Functional knowledge of, or prior experience in, the Life Sciences Research and Development domain is a plus
If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.
Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.
On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.
BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.
BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.
BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/
Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
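The "data quality and integrity through data validation and testing" responsibility above can be sketched in plain Python. This is a minimal, illustrative sketch: the field names and rules are hypothetical, not BMS's actual schema.

```python
# Illustrative record-level data-quality checks. REQUIRED_FIELDS and the
# date rule are hypothetical examples of the validation a pipeline might run
# before loading records into a warehouse.

REQUIRED_FIELDS = {"patient_id", "visit_date", "site"}

def validate_record(record: dict) -> list[str]:
    """Return a list of human-readable issues; an empty list means the record passes."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "visit_date" in record and not str(record["visit_date"]).startswith("20"):
        issues.append(f"suspicious visit_date: {record['visit_date']}")
    return issues

def partition_by_quality(records):
    """Split records into (clean, rejected_with_reasons) for downstream loading."""
    clean, rejected = [], []
    for rec in records:
        problems = validate_record(rec)
        if problems:
            rejected.append((rec, problems))
        else:
            clean.append(rec)
    return clean, rejected
```

Routing rejects to a quarantine table with their reasons, rather than dropping them silently, is what makes the validation step testable and auditable.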
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
DATABASE ENGINEERING
Responsibilities:
- Design, develop, and manage databases on the AWS cloud platform
- Develop and maintain automation scripts or jobs to perform routine database tasks such as provisioning, backups, restores, and data migrations
- Build and maintain automated testing frameworks for database changes and upgrades to minimize the risk of introducing errors
- Implement self-healing mechanisms to automatically recover from database failures or performance degradation
- Integrate database automation tools with CI/CD pipelines to enable continuous delivery and deployment of database changes
- Collaborate with cross-functional teams to understand their data requirements and ensure that the databases meet their needs
- Implement and manage database security policies, including access control, data encryption, and backup and recovery procedures
- Ensure that database backups and disaster recovery procedures are in place and tested regularly
- Develop and maintain database documentation, including data dictionaries, data models, and technical specifications
- Stay up-to-date with the latest cloud technologies and trends, and evaluate new tools and products that could improve database performance and scalability
Requirements: (PostgreSQL/MySQL/SQL Server, AWS CloudFormation/CDK, Python)
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Minimum of 3-6 years of experience in designing, building, and administering databases on the AWS cloud platform
- Strong experience with Infrastructure as Code (CloudFormation/AWS CDK) and automation experience in Python
- In-depth knowledge of AWS database services such as Amazon RDS, EC2, S3, Amazon Aurora, and Amazon Redshift, plus PostgreSQL/MySQL/SQL Server
- Strong understanding of database design principles, data modelling, and normalisation
- Experience with database migration to the AWS cloud platform
- Strong understanding of database security principles and best practices
- Excellent troubleshooting and problem-solving skills
- Ability to work independently and in a team environment
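As a rough illustration of the "automation scripts for routine database tasks" and "backups tested regularly" requirements above, here is a minimal backup-and-verify sketch in Python. SQLite stands in for a managed AWS database here purely so the example is self-contained; the table and data are invented.

```python
import sqlite3

def backup_database(source: sqlite3.Connection, dest: sqlite3.Connection) -> None:
    """Copy the full contents of `source` into `dest` using SQLite's online backup API."""
    source.backup(dest)

# Build a throwaway source database standing in for a production instance.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
src.executemany("INSERT INTO jobs (status) VALUES (?)", [("ok",), ("failed",)])
src.commit()

# Run the backup, then verify the copy before declaring success --
# mirroring the "backups tested regularly" practice the posting calls for.
dst = sqlite3.connect(":memory:")
backup_database(src, dst)
copied = dst.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
```

The key design point is the verification read after the copy: an automated backup job that never restores and checks its output is only assumed to work.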
Posted 1 week ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Wissen Technology is Hiring for Python + Data Engineer
About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.
Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Experience: 5-9 Years
Location: Mumbai
Key Responsibilities
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses)
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization
- Ensure data quality and consistency by implementing validation and governance practices
- Work on data security best practices in compliance with organizational policies and regulations
- Automate repetitive data engineering tasks using Python scripts and frameworks
- Leverage CI/CD pipelines for deployment of data workflows on AWS
Required Skills:
Professional Experience: 5+ years of experience in data engineering or a related field.
Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
- AWS Glue for ETL/ELT
- S3 for storage
- Redshift or Athena for data warehousing and querying
- Lambda for serverless compute
- Kinesis or SNS/SQS for data streaming
- IAM roles for security
Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
Version Control: Proficient with Git-based workflows.
Problem Solving: Excellent analytical and debugging skills.
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, Quality Assurance & Test Automation.
Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/?feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak:
https://www.ambitionbox.com/overview/wissen-technology-overview
https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work:
https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/
https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k
About Wissen Interview Process: https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/
Latest in Wissen in CIO Insider: https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Bengaluru
Remote
Job Information
Number of Positions: 1
Date Opened: 06/20/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5-8 years
Last Activity Time: 07/13/2025 11:32
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560002
Job Description
Job Title: Senior Data Engineer
Location: Remote
Experience: 5–8 Years
Employment Type: Full-Time
About the Role
Aptus Data Labs is looking for a talented and proactive Senior Data Engineer to help build the backbone of our enterprise data and AI initiatives. You'll work on modern data lake architectures and high-performance pipelines in AWS, enabling real-time insights and scalable analytics. This role reports to the Head – Data Platform and AI Lead, offering a unique opportunity to be part of a cross-functional team shaping the future of data-driven innovation.
Key Responsibilities
Data Engineering & Pipeline Development
- Design and develop reliable, reusable ETL/ELT pipelines using AWS Glue, Python, and Spark
- Process structured and semi-structured data (e.g., JSON, Parquet, CSV) efficiently for analytics and AI workloads
- Build automation and orchestration workflows using Airflow or AWS Step Functions
Data Lake Architecture & Integration
- Implement AWS-native data lake/lakehouse architectures using S3, Redshift, Glue Catalog, and Lake Formation
- Consolidate data from APIs, on-prem systems, and third-party sources into a centralized platform
- Optimize data models and partitioning strategies for high-performance queries
Security, IAM & Governance Support
- Ensure secure data architecture practices across AWS components using encryption, access control, and policy enforcement
- Implement and manage AWS IAM roles and policies to control data access across services and users
- Collaborate with platform and security teams to maintain compliance and audit readiness (e.g., HIPAA, GxP)
- Apply best practices in data security, privacy, and identity management in cloud environments
DevOps & Observability
- Automate deployment of data infrastructure using CI/CD pipelines (GitHub Actions, Jenkins, or AWS CodePipeline)
- Create Docker-based containers and manage workloads using ECS or EKS
- Monitor pipeline health, failures, and performance using CloudWatch and custom logs
Collaboration & Communication
- Partner with the Data Platform Lead and AI Lead to align engineering efforts with AI product goals
- Engage with analysts, data scientists, and business teams to gather requirements and deliver data assets
- Contribute to documentation, code reviews, and architectural discussions with clarity and confidence
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or equivalent
- 5–8 years of experience in data engineering, preferably in AWS cloud environments
- Proficient in Python, SQL, and AWS services: Glue, Redshift, S3, IAM, Lake Formation
- Experience managing IAM roles, security policies, and cloud-based data access controls
- Hands-on experience with orchestration tools like Airflow or AWS Step Functions
- Exposure to CI/CD practices and infrastructure automation
- Strong interpersonal and communication skills, able to convey technical ideas clearly
Preferred Additional Skills
- Proficiency in Databricks, Unity Catalog, and Spark-based distributed data processing
- Background in Pharma, Life Sciences, or other regulated environments (GxP, HIPAA)
- Experience with EMR, Snowflake, or hybrid-cloud data platforms
- Experience with BI/reporting tools such as Power BI or QuickSight
- Knowledge of integration tools (Boomi, Kafka) or real-time streaming frameworks
Ready to build data solutions that fuel AI innovation? Join Aptus Data Labs and play a key role in transforming raw data into enterprise intelligence.
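One responsibility above, optimizing "partitioning strategies for high-performance queries", often comes down to laying out Hive-style key prefixes (year=/month=/day=) in S3 so query engines can prune partitions. A minimal sketch in plain Python; the bucket and table names are made up for illustration:

```python
from datetime import date, timedelta

def partition_key(base: str, table: str, day: date) -> str:
    """Build a Hive-style partition prefix for one day of data."""
    return f"{base}/{table}/year={day.year}/month={day.month:02d}/day={day.day:02d}/"

def partition_range(base: str, table: str, start: date, days: int) -> list[str]:
    """Enumerate the partition prefixes a backfill job would need to touch."""
    return [partition_key(base, table, start + timedelta(days=i)) for i in range(days)]

# Example: a three-day backfill spanning a month boundary.
keys = partition_range("s3://example-lake/raw", "events", date(2024, 1, 30), 3)
```

Zero-padding the month and day keeps lexicographic ordering of prefixes aligned with chronological ordering, which matters for listing and pruning.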
Posted 1 week ago
1.0 years
1 - 2 Lacs
Jaipur
On-site
About the Role We are seeking a proactive and detail-oriented Apache Superset & SQL Expert with 1+ years of experience in the healthcare domain. You’ll be responsible for building insightful BI dashboards and maintaining complex data pipelines to support mission-critical analytics for healthcare operations and compliance reporting. Key Responsibilities Develop and maintain advanced Apache Superset dashboards tailored for healthcare KPIs and operational metrics Write, optimise, and maintain complex SQL queries to extract and transform data from multiple healthcare systems Collaborate with data engineering and clinical teams to define and model datasets for visualisation Ensure dashboards comply with healthcare data governance, privacy (e.g., HIPAA), and audit requirements Monitor performance, implement row-level security, and maintain a robust Superset configuration Translate clinical and operational requirements into meaningful visual stories Required Skills & Experience 1+ years of domain experience in healthcare analytics or working with healthcare datasets (EHR, claims, patient outcomes, etc.) 
3+ years of experience working with Apache Superset in a production environment Strong command over SQL, including query tuning, joins, aggregations, and complex transformations Hands-on experience with data modelling and relational database design Solid understanding of clinical terminology, healthcare KPIs, and reporting workflows Experience in working with PostgreSQL, MySQL, or other SQL-based databases Strong documentation, communication, and stakeholder-facing skills Nice-to-Have Familiarity with HIPAA, HL7/FHIR data structures, or other regulatory standards Experience with Python, Flask, or Superset plugin development Exposure to modern healthcare data platforms, dbt, or Airflow Experience integrating Superset with EMR, clinical data lakes, or warehouse systems like Redshift or BigQuery Job Type: Full-time Pay: ₹10,000.00 - ₹20,000.00 per month Schedule: Day shift Work Location: In person Expected Start Date: 19/07/2025
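As a small illustration of the aggregating SQL a Superset dataset like those described above is typically built on, here is a toy denial-rate KPI. The schema and values are invented, and SQLite stands in for the warehouse so the example is self-contained:

```python
import sqlite3

# A toy "claims" table queried the way a healthcare KPI dataset might be defined:
# one SQL statement computing denial rate per provider.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (provider TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [("A", "paid"), ("A", "denied"), ("A", "paid"), ("B", "paid")],
)

# In SQLite, a comparison yields 1/0, so SUM(status = 'denied') counts denials.
KPI_SQL = """
SELECT provider,
       ROUND(100.0 * SUM(status = 'denied') / COUNT(*), 1) AS denial_rate_pct
FROM claims
GROUP BY provider
ORDER BY provider
"""
rows = conn.execute(KPI_SQL).fetchall()
```

Multiplying by 100.0 (a float) before dividing avoids the integer-division trap that would otherwise silently round every rate to zero.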
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC? At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Job Description & Summary: A career within PWC Responsibilities: Job Title: Cloud Data Engineer (AWS/Azure/Databricks/GCP) Experience:3-8 years in Data Engineering Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions. Key Responsibilities: - Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP. - Implement data ingestion and transformation processes to facilitate efficient data warehousing. - Utilize cloud services to enhance data processing capabilities: - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS. - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus. - GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion. - Optimize Spark job performance to ensure high efficiency and reliability. - Stay proactive in learning and implementing new technologies to improve data processing frameworks. - Collaborate with cross-functional teams to deliver robust data solutions. - Work on Spark Streaming for real-time data processing as necessary. Qualifications: - 3-8 years of experience in data engineering with a strong focus on cloud environments. - Proficiency in PySpark or Spark is mandatory. - Proven experience with data ingestion, transformation, and data warehousing. - In-depth knowledge and hands-on experience with cloud services(AWS/Azure/GCP): - Demonstrated ability in performance optimization of Spark jobs. - Strong problem-solving skills and the ability to work independently as well as in a team. - Cloud Certification (AWS, Azure, or GCP) is a plus. 
- Familiarity with Spark Streaming is a bonus.
Mandatory skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Preferred skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Years of experience required: 3-8 years
Education qualification: BE/BTECH, ME/MTECH, MBA, MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Technology, Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: PySpark, Python (Programming Language)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bengaluru, Karnataka | Job ID 30185740 | Job Category: Digital Technology
Role: SQL Developer with data modeling and AWS/Azure
Location: Bangalore
Full/Part-time: Full-Time
Build a career with confidence: Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.
About the Role: We are looking for a SQL Developer with an ETL background and AWS or Azure cloud platform experience.
Job Description:
- Design, develop, and implement scalable and efficient data warehouse solutions on cloud platforms using Azure Fabric, AWS Redshift, etc.
- Create and optimize data models to support business reporting and analytical needs.
- Integrate data using ETL tools like Azure Data Factory.
- Write complex SQL queries, stored procedures, and functions for data manipulation and analysis.
- Implement data quality checks and validation processes to ensure data accuracy and integrity.
- Monitor and optimize data warehouse performance, including query tuning, indexing, and data partitioning strategies.
- Identify and troubleshoot data-related issues, ensuring data availability and reliability.
- Collaborate with data architects, data engineers, business analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
Analytical Skills: Strong problem-solving, analytical, and critical thinking skills.
Preferred Skills & Tools for this role (7 to 10 years of experience in the skill sets below):
- Cloud Platforms: Azure (Data Factory, Azure Fabric, SQL DB, Data Lake) or AWS (Redshift)
- Databases: PostgreSQL or MS SQL Server
- ETL Tools: Azure Data Factory or experience with any comparable ETL tool
- Languages: Expert-level proficiency in T-SQL and Python
- BI Tools: Power BI, Tableau, or Spotfire
- Version Control & DevOps: Azure DevOps, Git (any of these is preferred)
Benefits: We are committed to offering competitive benefits programs for all our employees and enhancing our programs when necessary. Make yourself a priority with flexible schedules and parental leave. Drive forward your career through professional development opportunities. Achieve your personal goals with our Employee Assistance Programme.
Our commitment to you: Our greatest assets are the expertise, creativity, and passion of our employees. We strive to provide a great place to work that attracts, develops, and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine of growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback, and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Now!
Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.
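The query-tuning and indexing duties this posting mentions can be demonstrated with SQLite's EXPLAIN QUERY PLAN, which reports whether a filter uses an index seek or a full scan. The table and index here are hypothetical stand-ins for a warehouse workload, and SQLite is used only so the sketch is self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id INTEGER, ts TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(i % 10, f"2024-01-{i % 28 + 1:02d}", float(i)) for i in range(1000)],
)

# Without an index, the WHERE filter forces a full table scan.
# After adding one on the filtered column, the planner switches to an index seek.
conn.execute("CREATE INDEX idx_readings_device ON readings (device_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT AVG(value) FROM readings WHERE device_id = ?", (3,)
).fetchall()
plan_text = " ".join(str(row[-1]) for row in plan)
```

Checking the plan, rather than just timing the query, is the repeatable way to confirm a tuning change actually took effect.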
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Level Up Your Career with Zynga! At Zynga, we bring people together through the power of play. As a global leader in interactive entertainment and a proud label of Take-Two Interactive, our games have been downloaded over 6 billion times—connecting players in 175+ countries through fun, strategy, and a little friendly competition. From thrilling casino spins to epic strategy battles, mind-bending puzzles, and social word challenges, our diverse game portfolio has something for everyone. Fan-favorites and latest hits include FarmVille™, Words With Friends™, Zynga Poker™, Game of Thrones Slots Casino™, Wizard of Oz Slots™, Hit it Rich! Slots™, Wonka Slots™, Top Eleven™, Toon Blast™, Empires & Puzzles™, Merge Dragons!™, CSR Racing™, Harry Potter: Puzzles & Spells™, Match Factory™, and Color Block Jam™—plus many more! Founded in 2007 and headquartered in California, our teams span North America, Europe, and Asia, working together to craft unforgettable gaming experiences. Whether you're spinning, strategizing, matching, or competing, Zynga is where fun meets innovation—and where you can take your career to the next level. Join us and be part of the play! Position Overview We are seeking expert and hardworking engineers to join our collaborative and innovative team. Zynga’s mission is to “Connect the World through Games” by building a truly social experience that makes the world a better place. The ideal candidate needs to have a strong focus on building high-quality, maintainable software that has global impact. The Analytics Engineering team is responsible for all things data at Zynga. We own the full game and player data pipeline - from ingestion to storage to driving insights and analytics. A Senior Software Engineer looks after the software design and development of quality services and products to support the Analytics needs of our games. 
In this role, you will be part of our Analytics Engineering group, focusing on sophisticated technology development for building scalable data infrastructure and end-to-end services that can be used by the various games. We are a 120+ person organization serving 1,500 others across 13 global locations.
What You'll Do
- Build and operate a multi-PB-scale data platform.
- Design, code, and develop new features, fix bugs, and improve systems and data pipelines (ETLs) while adhering to the SLOs.
- Follow engineering best methodologies to ensure performance, reliability, scalability, and observability of the code.
- Collaborate with other Software Engineers, ML Engineers, Data Scientists, and other collaborators, taking on the learning and leadership opportunities that arise every single day.
- Support the professional growth of junior engineers.
- Contribute to and promote sustainable engineering practices by incorporating best practices and producing outstanding code, documentation, testing and monitoring.
What You Bring
- Bachelor's degree in Computer Science or a related technical field, or equivalent experience.
- Minimum 5 years of strong data engineering design/development experience in building large-scale, distributed data platforms/products.
- Advanced coding expertise in SQL and Python or a JVM-based language.
- Exposure to heterogeneous data storage systems: relational, NoSQL, in-memory, etc.
- Knowledge of data modeling, lineage, data access and its governance.
- Proficiency in AWS services like Redshift, Kinesis, Lambda, RDS, EKS/ECS, S3, etc.
- Exposure to open-source software, frameworks and broader groundbreaking technologies (Airflow, Kafka, DataHub, etc.).
- Familiarity with infrastructure provisioning, deployment, and observability tools like Terraform, DataDog, etc.
- Proven track record of delivering work on time with attention to quality.
- Excellent written and spoken communication skills and the ability to work effectively in a geographically distributed team environment.
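The data-pipeline (ETL) work described above follows a common extract, transform, load shape, sketched below in plain Python. The stage logic and event fields are illustrative, not Zynga's actual pipeline:

```python
# A minimal ETL sketch: each stage is a generator, so records stream through
# without materializing intermediate lists. Field names are made up.

def extract(raw_lines):
    """Parse newline-delimited 'player,score' events into dicts."""
    for line in raw_lines:
        player, score = line.strip().split(",")
        yield {"player": player, "score": int(score)}

def transform(events, min_score=0):
    """Drop events below a threshold and tag the survivors."""
    for e in events:
        if e["score"] >= min_score:
            yield {**e, "valid": True}

def load(events, sink):
    """Append events to `sink` (standing in for a warehouse table); return the row count."""
    count = 0
    for e in events:
        sink.append(e)
        count += 1
    return count

sink = []
loaded = load(transform(extract(["ana,10", "bo,-2", "cy,7"]), min_score=0), sink)
```

Composing the stages as generators keeps each one independently testable and makes backpressure-free streaming the default, which is one reason real pipeline frameworks expose a similar staged shape.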
We encourage you to apply even if you don’t meet every single requirement. Your unique perspective and experience could be exactly what we’re looking for.

What We Offer You

Zynga offers a world-class benefits package that helps support and balance the needs of our teams. To find out more about our benefits, visit the Zynga Benefits page.

We are proud to be an equal opportunity employer, which means we are committed to creating and celebrating diverse thoughts, cultures, and backgrounds throughout our organization. Employment with us is based on substantive ability, objective qualifications, and work ethic – not an individual’s race, creed, color, religion, sex or gender, gender identity or expression, sexual orientation, national origin or ancestry, alienage or citizenship status, physical or mental disability, pregnancy, age, genetic information, veteran status, marital status, status as a victim of domestic violence or sex offenses, reproductive health decision, or any other characteristics protected by applicable law.

As an equal opportunity employer, we are committed to providing the necessary support and accommodation to qualified individuals with disabilities, health conditions, or impairments (subject to any local qualifying requirements) to ensure their full participation in the job application or interview process. Please contact us at accommodationrequest@zynga.com to request any accommodations or for support related to your application for an open position.

Please be aware that Zynga does not conduct job interviews or make job offers over third-party messaging apps such as Telegram, WhatsApp, or others. Zynga also does not engage in any financial exchanges during the recruitment or onboarding process, and will never ask a candidate for their personal or financial information over an app or other unofficial chat channel. Any attempt to do so may be the result of a scam or phishing attack, and you should not engage.
Zynga’s in-house recruitment team will only contact individuals through their official Company email addresses (i.e., via a zynga.com, naturalmotion.com, smallgiantgames.com, themavens.com, gram.gs email domain).
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
Calfus is a Silicon Valley-headquartered software engineering and platforms company that seeks to inspire its team to rise faster, higher, stronger, and work together to build software at speed and scale. The company's core focus lies in creating engineered digital solutions that bring about a tangible and positive impact on business outcomes while standing for #Equity and #Diversity in its ecosystem and society at large.

As a Data Engineer specializing in BI Analytics & DWH at Calfus, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower the organization to make data-driven decisions. Leveraging expertise in Power BI, Tableau, and ETL processes, you will create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels.

Key Responsibilities:

- BI Architecture & DWH Solution Design: Develop and design scalable BI analytical & DWH solutions that meet business requirements, leveraging tools such as Power BI and Tableau.
- Data Integration: Oversee ETL processes using SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: Create and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: Utilize SQL to write complex queries and stored procedures, and manage data transformations using joins and cursors.
- Visualization Development: Lead the design of interactive dashboards and reports in Power BI and Tableau, adhering to best practices in data visualization.
- Collaboration: Work closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
- Performance Optimization: Analyse and optimize BI solutions for performance, scalability, and reliability.
- Data Governance: Implement best practices for data quality and governance to ensure accurate reporting and compliance.
- Team Leadership: Mentor and guide junior BI developers and analysts, fostering a culture of continuous learning and improvement.
- Azure Databricks: Leverage Azure Databricks for data processing and analytics, ensuring seamless integration with existing BI solutions.

Qualifications:

- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6-12 years of experience in BI architecture and development, with a strong focus on Power BI and Tableau.
- Proven experience with ETL processes and tools, especially SSIS.
- Strong proficiency in SQL Server, including advanced query writing and database management.
- Exploratory data analysis with Python.
- Familiarity with the CRISP-DM model.
- Ability to work with different data models.
- Familiarity with databases like Snowflake, Postgres, Redshift & MongoDB.
- Experience with visualization tools such as Power BI, QuickSight, Plotly, and/or Dash.
- Strong programming foundation with Python for data manipulation and analysis using Pandas, NumPy, and PySpark; data serialization formats like JSON, CSV, Parquet & Pickle; database interaction; data pipeline and ETL tools; cloud services & tools; and code quality and management using version control.
- Ability to interact with REST APIs and perform web scraping tasks is a plus.

Calfus Inc. is an Equal Opportunity Employer.
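The Python-for-data-manipulation requirement above (Pandas, CSV/Parquet serialization) amounts to small transform steps of this shape. A minimal sketch, with invented table and column names — not Calfus code:

```python
import pandas as pd

# Hypothetical raw sales extract; in practice this would be read from a
# CSV/Parquet file or an SSIS-fed staging table in SQL Server.
raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region": ["North", "South", "North", "East"],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

# Transform: aggregate to a region-level summary suitable for a BI dashboard.
summary = (
    raw.groupby("region", as_index=False)["amount"]
       .sum()
       .rename(columns={"amount": "total_amount"})
       .sort_values("total_amount", ascending=False)
       .reset_index(drop=True)
)

print(summary.loc[0, "region"], summary.loc[0, "total_amount"])  # North 320.0
```

The same aggregation could equally be pushed down into SQL; doing it in Pandas is typical for exploratory analysis before the logic is promoted into a warehouse view.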
Posted 1 week ago
0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
Remote
Data Architect / Developer

Mode: Work From Office (5 days) / Remote, with overlap between India time and U.S. time
Work Location: Hyderabad

Job Description

- The architect is required to know AWS Redshift.
- Design, develop, test, deploy and troubleshoot SQL scripts and stored procedures that implement complex PL/SQL solutions.
- Must have experience in query optimization.
- Must be able to understand complex business logic and fine-tune long-running queries.
- Must be expert at reverse engineering and extracting business requirements and rules from the code.
- Scheduling and monitoring ETL jobs using SSIS.
- Build and review functional and business specifications.
- Expert understanding of PostgreSQL, stored procedures, views, and functions.
- Provide estimates for the ETL work.
- Database/data warehouse performance tuning, monitoring and troubleshooting expertise.
- Expertise in query construction, including writing and optimizing complex SQL queries that contain subqueries, joins, and derived tables.
- Troubleshooting and fixing the code.
- Unit testing the code and documenting the results.
- Must be expert at providing non-functional requirements.
- Help create test data and provide support to the QA process.
- Work with the gatekeeper on promoting the data stage jobs from QA to production.
- Build, patch and data-fix in the production environment.
- Ensure very high availability, to the scale of 99.999%.
- Establish coding practices, best practices, and SOPs.
- Participate in code reviews and enforce the relevant process.
- Strong analytical and thinking capabilities, and good communication skills to conduct requirement-gathering sessions and interview customers.
- Ability to perform independent research to understand the product requirements and customer needs.
- Communicates effectively with the project teams and other stakeholders; translates technical details for a non-technical audience.
- Expert at creating architectural artifacts for the data warehouse.
- Team and effort management; ability to set expectations for the client and the team.
Ensure all deliverables are delivered on time and at the highest quality.

Technical Skills

- ETL: SSIS
- SQL: stored procedures, functions, triggers, etc.
- Query optimization
- Server monitoring
- ETL: AWS Glue
- DBMS: AWS Aurora MySQL, AWS Redshift, PostgreSQL
- Cloud services: AWS cloud services, including EC2, RDS, S3, and IAM
- Data skills: SQL performance tuning
- Coding: knowledge of a programming language like C#, Python or Java, to oversee dev resources
- Team and people management
- Agile Scrum practices
- A great learning attitude
- Eagerness to take ownership
- A global mindset
- Excellent communication skills

(ref:hirist.tech)
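"Fine-tuning long-running queries" often means rewriting a correlated subquery as a join on a derived table. An illustrative sketch using an in-memory SQLite database (invented table and data; real work here would target Redshift/PostgreSQL, where the planner and cost profile differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1,1,100),(2,1,250),(3,2,90),(4,2,40);
""")

# Correlated subquery: conceptually re-evaluated once per outer row.
slow = conn.execute("""
    SELECT id FROM orders o
    WHERE amount = (SELECT MAX(amount) FROM orders
                    WHERE customer_id = o.customer_id)
""").fetchall()

# Equivalent join against a derived table: one aggregation pass.
fast = conn.execute("""
    SELECT o.id
    FROM orders o
    JOIN (SELECT customer_id, MAX(amount) AS mx
          FROM orders GROUP BY customer_id) m
      ON m.customer_id = o.customer_id AND m.mx = o.amount
""").fetchall()

print(sorted(slow) == sorted(fast))  # True: both return orders 2 and 3
```

The rewrite preserves the result set while making the aggregation a single pass the optimizer can plan around, which is the essence of the reverse-engineer-then-rewrite workflow the role describes.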
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC will focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.

Skills

Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:

- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
MS-SQL Database Administrator – Senior Associate

Basic Qualifications, Job Requirements and Preferences:

Minimum Degree Required: Bachelor’s degree
Minimum Years of Experience: 6 year(s)
Certification(s) Required: NA

Database Admin

The Database Lead/Admin will coordinate the day-to-day activities of the operational systems, processes, and infrastructure required for all service offerings being developed that assist clients on their cloud Managed Service Delivery. This individual will work with one or multiple clients to gather requirements and the corresponding work needed based on the client’s Cloud Journey roadmap. They will manage the day-to-day business of operations, including various stakeholders and internal and external delivery partners.

Responsibilities

- The database role supports our services that focus on database technologies, including Amazon Aurora, DynamoDB, Redshift, Athena, Couchbase, Cassandra, MySQL, MS-SQL and Oracle.
- Extensive experience with installation, configuration, patching, backup-recovery, and configuring replication on Linux/Windows and CentOS infrastructure.
- Experience in discussing DB requirements, performance, and integration issues with clients and providing better solutions or approaches, along with capacity planning.
- Responsible for identifying and resolving performance bottlenecks in relation to CPU, I/O, memory, and DB architecture.
- Responsible for database migration, i.e., on-prem to on-prem or to cloud.
- Responsible for the resource layout for new applications from the DB side.
- Day-to-day production DB support and maintenance of different versions of server databases.
- Design and build function-centric solutions in the context of the transition from traditional, legacy platforms to microservices architectures.
- Identifies trends and assesses opportunities to improve processes and execution.
- Raises and tracks issues and conflicts, removes barriers, resolves issues of medium complexity involving partners and escalates to appropriate levels when required.
Solicits and responds to feedback while gaining dedication and support. Stays up to date on industry regulations, trends, and technology. Coordinates with management to ensure all operational, administrative, and compliance functions within the team are being carried out in accordance with regulatory standard methodologies.

Qualifications

- Bachelor’s degree in Computer Science or a related technology field preferred
- Minimum of 4 years of hands-on experience with database technologies
- Strong working knowledge of ITIL principles and ITSM
- Current understanding of industry trends and methodologies
- Outstanding verbal and written communication skills
- Excellent attention to detail
- Strong interpersonal skills and leadership qualities
Posted 1 week ago
50.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bangalore
Type: Full-time

About Digit88

Digit88 empowers digital transformation for innovative and high-growth B2B and B2C SaaS companies as their trusted offshore software product engineering partner! We are a lean mid-stage software company, with a team of 75+ fantastic technologists, backed by executives with a deep understanding of and extensive experience in consumer and enterprise product development across large corporations and startups. We build highly efficient and effective engineering teams that solve real and complex problems for our partners.

With 50+ years of collective experience in areas ranging from B2B and B2C SaaS, web and mobile apps, e-commerce platforms and solutions, and custom enterprise SaaS platforms, and domains spread across Conversational AI, Chatbots, IoT, Health-tech, ESG/Energy Analytics, and Data Engineering, the founding team thrives in a fast-paced and challenging environment that allows us to showcase our best.

The Vision: To be the most trusted technology partner to innovative software product companies worldwide.

The Opportunity

Digit88 manages and is expanding the dedicated offshore product development team for its US (Bay Area, NYC) based NLP/Chatbot platform development partner, which is building a next-generation AI/NLP-based customer engagement platform. The candidate would be joining an existing team of 25+ engineers and helping expand the team that develops and delivers cutting-edge front-end applications for our clients that engage with consumers and manage conversations across the world.

Responsibilities

5+ years of professional experience in Java development building highly scalable distributed systems.
Strong foundation in Core Java, Data Structures, and Object-Oriented Design.
Experience in building and maintaining scalable distributed systems using Java 1.8+, Spring/Spring Boot, microservices architecture, and RESTful APIs.
Hands-on experience with SQL (MySQL) and NoSQL (Cassandra, Mongo or DynamoDB) databases.
Experience with Amazon tools and services like Lambda for serverless architectures and Redshift for data warehousing and analytical processing.
Proven experience in building high-performance, high-availability systems.
Experience with caching frameworks (e.g., Redis, Memcached) and messaging systems.
Familiarity with building and deploying apps using Docker, Kubernetes, and CI/CD workflows.
Familiarity with monitoring, logging, and observability tools in a cloud environment.

To Be Successful In This Role, You Should Possess:

Bachelor's degree in Computer Science or a related field.
Good to have: AWS certification (e.g., AWS Certified Developer, AWS Certified Solutions Architect).
Experience with containerization technologies such as Docker and Kubernetes.
Knowledge of continuous integration and continuous deployment (CI/CD) pipelines.
Familiarity with Agile/Scrum methodologies.
Exposure to analytics pipelines or working with analytics databases (e.g., Redshift, ClickHouse).
Experience in the design, development, and deployment of agentic AI.

@ Digit88:

Comprehensive insurance (life, health, accident)
Flexible work model
Accelerated learning and non-linear growth
Flat organization structure driven by ownership and accountability
Work with global peers, including top engineers and professionals from companies like Apple, Amazon, IBM Research, and Adobe
Make a global impact with your work, leading innovations in Conversational AI, Tele-Medicine, Healthcare, and more
Collaborate with a founding team of serial entrepreneurs with multiple successful exits, offering immense learning opportunities and challenges

Join us and partner in our growth journey! (ref:hirist.tech)
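The caching-framework requirement above (Redis, Memcached) boils down to the read-through pattern: check the cache, and only hit the backing store on a miss or expiry. A minimal in-process sketch — written in Python for brevity even though the role's stack is Java/Spring, with all names invented:

```python
import time

class TTLCache:
    """Read-through cache with per-entry expiry, illustrating what
    Redis/Memcached provide as an out-of-process shared service."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def get_or_load(self, key, loader):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and entry[0] > now:       # cache hit, not yet expired
            return entry[1]
        value = loader(key)                # miss: fall through to backing store
        self.store[key] = (now + self.ttl, value)
        return value

calls = []
def load_user(key):
    calls.append(key)                      # stands in for a slow DB/API call
    return {"id": key, "name": f"user-{key}"}

cache = TTLCache(ttl_seconds=60)
cache.get_or_load(7, load_user)
cache.get_or_load(7, load_user)            # second call served from cache
print(len(calls))                          # the loader ran only once
```

In a distributed system the cache lives outside the process so many service instances share it; the access pattern, however, is the same.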
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Years of Experience: candidates with 15+ years of hands-on experience

Preferred Skills

- Experience in GCP and one more cloud platform (AWS/Azure), specifically data migration experience
- Use an analytical and data-driven approach to drive a deep understanding of a fast-changing business
- Familiarity with data technologies such as Snowflake, Databricks, Redshift, Synapse
- Leading large-scale data modernization and governance initiatives emphasizing the strategy, design and development of Platform-as-a-Service and Infrastructure-as-a-Service that extend to private and public cloud deployment models
- Experience in designing, architecting, implementing, and managing data lakes/warehouses
- Experience with complex environments delivering application migration to cloud platforms
- Understanding of Agile, Scrum and Continuous Delivery methodologies
- Hands-on experience with Docker and Kubernetes or other container orchestration platforms
- Strong experience in data management with an understanding of analytics and reporting
- Understanding of emerging technologies and the latest data engineering providers
- Experience in implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
6.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC will focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.

Skills

Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:

- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
MS-SQL Database Administrator – Senior Associate

Basic Qualifications, Job Requirements and Preferences:

Minimum Degree Required: Bachelor’s degree
Minimum Years of Experience: 6 year(s)
Certification(s) Required: NA

Database Admin

The Database Lead/Admin will coordinate the day-to-day activities of the operational systems, processes, and infrastructure required for all service offerings being developed that assist clients on their cloud Managed Service Delivery. This individual will work with one or multiple clients to gather requirements and the corresponding work needed based on the client’s Cloud Journey roadmap. They will manage the day-to-day business of operations, including various stakeholders and internal and external delivery partners.

Responsibilities

- The database role supports our services that focus on database technologies, including Amazon Aurora, DynamoDB, Redshift, Athena, Couchbase, Cassandra, MySQL, MS-SQL and Oracle.
- Extensive experience with installation, configuration, patching, backup-recovery, and configuring replication on Linux/Windows and CentOS infrastructure.
- Experience in discussing DB requirements, performance, and integration issues with clients and providing better solutions or approaches, along with capacity planning.
- Responsible for identifying and resolving performance bottlenecks in relation to CPU, I/O, memory, and DB architecture.
- Responsible for database migration, i.e., on-prem to on-prem or to cloud.
- Responsible for the resource layout for new applications from the DB side.
- Day-to-day production DB support and maintenance of different versions of server databases.
- Design and build function-centric solutions in the context of the transition from traditional, legacy platforms to microservices architectures.
- Identifies trends and assesses opportunities to improve processes and execution.
- Raises and tracks issues and conflicts, removes barriers, resolves issues of medium complexity involving partners and escalates to appropriate levels when required.
Solicits and responds to feedback while gaining dedication and support. Stays up to date on industry regulations, trends, and technology. Coordinates with management to ensure all operational, administrative, and compliance functions within the team are being carried out in accordance with regulatory standard methodologies.

Qualifications

- Bachelor’s degree in Computer Science or a related technology field preferred
- Minimum of 4 years of hands-on experience with database technologies
- Strong working knowledge of ITIL principles and ITSM
- Current understanding of industry trends and methodologies
- Outstanding verbal and written communication skills
- Excellent attention to detail
- Strong interpersonal skills and leadership qualities
Posted 1 week ago
13.0 years
0 Lacs
Andhra Pradesh, India
On-site
Summary About the Organization

A career in our Advisory Acceleration Center is the natural extension of PwC’s leading global delivery capabilities. The team consists of highly skilled resources that can assist in the areas of helping clients transform their business by adopting technology using bespoke strategy, operating model, processes and planning. You’ll be at the forefront of helping organizations around the globe adopt innovative technology solutions that optimize business processes or enable scalable technology. Our team helps organizations transform their IT infrastructure, modernize applications and data management to help shape the future of business. An essential and strategic part of Advisory's multi-sourced, multi-geography Global Delivery Model, the Acceleration Centers are a dynamic, rapidly growing component of our business. The teams out of these Centers have achieved remarkable results in process quality and delivery capability, resulting in a loyal customer base and a reputation for excellence.

Job Description

We are looking for a Senior Data Architect with experience in the design, build, and optimization of complex data landscapes and legacy modernization projects. The ideal candidate will have deep expertise in database management, data modeling, cloud data solutions, and ETL (Extract, Transform, Load) processes. This role requires a strong leader capable of guiding data teams and driving the design and implementation of scalable data architectures.

Key areas of expertise include:

- Design and implement scalable and efficient data architectures to support business needs.
- Develop data models (conceptual, logical, and physical) that align with organizational goals.
- Lead database design and optimization efforts for structured and unstructured data.
- Establish ETL pipelines and data integration strategies for seamless data flow.
- Define data governance policies, including data quality, security, privacy, and compliance.
- Work closely with engineering, analytics, and business teams to understand requirements and deliver data solutions.
- Oversee cloud-based data solutions (AWS, Azure, GCP) and modern data warehouses (Snowflake, BigQuery, Redshift).
- Ensure high availability, disaster recovery, and backup strategies for critical databases.
- Evaluate and implement emerging data technologies, tools, and frameworks to improve efficiency.
- Conduct data audits, performance tuning, and troubleshooting to maintain optimal performance.

Qualifications

- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 13+ years of experience in data modeling, including conceptual, logical, and physical data design.
- 5-8 years of experience in cloud data lake platforms such as AWS Lake Formation, Delta Lake, Snowflake or Google BigQuery.
- Proven experience with NoSQL databases and data modeling techniques for non-relational data.
- Experience with data warehousing concepts, ETL/ELT processes, and big data frameworks (e.g., Hadoop, Spark).
- Hands-on experience delivering complex, multi-module projects in diverse technology ecosystems.
- Strong understanding of data governance, data security, and compliance best practices.
- Proficiency with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
- Excellent leadership and communication skills, with a proven ability to manage teams and collaborate with stakeholders.

Preferred Skills

- Experience with modern data architectures, such as data fabric or data mesh.
- Knowledge of graph databases and modeling for technologies like Neo4j.
- Proficiency with programming languages like Python, Scala, or Java.
- Understanding of CI/CD pipelines and DevOps practices in data engineering.
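The conceptual-to-physical modeling the role describes often lands in a star schema: one fact table keyed to dimension tables, queried with joins for reporting. A minimal sketch with invented tables, using in-memory SQLite to keep it self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: descriptive attributes, one row per product.
    CREATE TABLE dim_product(product_id INTEGER PRIMARY KEY, category TEXT);
    -- Fact: measurable events, foreign-keyed to the dimension.
    CREATE TABLE fact_sales(
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL);
    INSERT INTO dim_product VALUES (1,'toys'),(2,'books');
    INSERT INTO fact_sales VALUES (1,1,10),(2,1,15),(3,2,40);
""")

# A typical reporting query: aggregate facts, sliced by a dimension attribute.
rows = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING(product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('books', 40.0), ('toys', 25.0)]
```

In Snowflake, BigQuery or Redshift the DDL and distribution/clustering choices differ, but the logical separation of facts from dimensions is the same.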
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
DevOn is a leading provider of innovative technology solutions focused on data-driven decision-making, cloud computing, and advanced analytics, and its dynamic team is dedicated to solving complex business problems through technology. We are currently seeking a skilled and motivated Data Engineer Lead to join our team.

As a Data Engineer Lead, your primary responsibility will be to lead the design, development, and maintenance of data pipelines and ETL workflows utilizing modern cloud technologies. You will collaborate closely with cross-functional teams to ensure data availability, reliability, and scalability, facilitating data-driven decision-making throughout the organization. This role necessitates a deep understanding of Python, PySpark, AWS Glue, Redshift, SQL, Jenkins, Bitbucket, EKS, and Airflow.

Key Responsibilities:

- Lead the design and implementation of scalable data pipelines and ETL workflows in a cloud environment, primarily AWS.
- Develop and manage data ingestion, transformation, and storage frameworks using AWS Glue, PySpark, and Redshift.
- Architect and optimize complex SQL queries for large datasets to maintain data integrity across systems.
- Work with data scientists, analysts, and business stakeholders to comprehend data requirements and provide high-quality data solutions.
- Automate the end-to-end data pipeline process using Jenkins and Bitbucket, ensuring efficient CI/CD practices.
- Optimize and oversee data orchestration utilizing Apache Airflow.
- Offer technical leadership and mentorship to junior team members, ensuring adherence to best practices for data engineering.
- Utilize AWS services like Redshift, S3, Lambda, and EKS for the deployment and management of data solutions.
- Troubleshoot and resolve complex data pipeline issues, minimizing downtime and ensuring high availability.
- Engage in architecture and design reviews, contributing insights on technical solutions and enhancements.
- Continuously assess new tools and technologies to enhance the efficiency and scalability of our data infrastructure.

Required Skills and Qualifications:

- 5+ years of professional experience in data engineering, with a track record of building scalable data pipelines and ETL workflows.
- Proficiency in Python for data processing and scripting.
- Hands-on experience with PySpark for large-scale data processing.
- Comprehensive knowledge of AWS Glue, Redshift, S3, and other AWS services.
- Advanced skills in SQL for data manipulation and optimization.
- Experience with Jenkins and Bitbucket for CI/CD automation.
- Familiarity with EKS (Elastic Kubernetes Service) for containerized deployment of data applications.
- Proficiency in Apache Airflow for data orchestration and workflow automation.
- Strong problem-solving abilities and the capability to debug complex issues in data workflows.
- Excellent communication skills, enabling collaboration with cross-functional teams and clear explanation of technical concepts.
- Ability to work in an Agile development environment, managing multiple priorities and meeting tight deadlines.

Preferred Qualifications:

- Experience with additional AWS services like Lambda, Redshift Spectrum, and Athena.
- Familiarity with Docker and container orchestration technologies such as Kubernetes.
- Knowledge of data modeling and data warehousing concepts.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
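The pipelines this role automates with Glue and Airflow follow an extract-transform-load shape with retries on failure. A deliberately simplified, plain-Python sketch (all functions and data are invented; in production the stages would be Glue/PySpark jobs and the retry/backoff would be declared on the Airflow task, not hand-rolled):

```python
import time

def extract():
    # Stands in for reading from S3 / an upstream source.
    return [{"id": 1, "clicks": 3}, {"id": 2, "clicks": 0}, {"id": 3, "clicks": 7}]

def transform(rows):
    # e.g. drop empty events and add a derived column.
    return [{**r, "active": True} for r in rows if r["clicks"] > 0]

def load(rows, sink):
    sink.extend(rows)  # stands in for a Redshift COPY or Glue sink write

def run_pipeline(sink, retries=2):
    for attempt in range(retries + 1):
        try:
            load(transform(extract()), sink)
            return True
        except Exception:
            time.sleep(0.1 * (attempt + 1))  # simple linear backoff
    return False

warehouse = []
run_pipeline(warehouse)
print(len(warehouse))  # 2 rows survive the transform
```

Keeping each stage a pure function of its input is what makes the same logic portable between a local test run and a scheduled Glue/Airflow deployment.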
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Description

The Maintenance Automation Platform (MAP) team within the Global Reliability and Maintenance Engineering (RME) Central Team is looking for an exceptional business intelligence engineer to join us. In this role, you will work on an analytical team to bust myths, create insights, and produce recommendations to help Central RME deliver world-class service to the Amazon network. As part of the team, you will be involved in all phases of research, experiment, design and analysis, including defining research questions, designing experiments, identifying data requirements, and communicating insights and recommendations. You'll also be expected to continuously learn new systems, tools, and industry best practices to analyze big data and enhance our analytics. These are exciting, fast-paced businesses in which we get to work on extremely interesting analytical problems, in an environment where you get to learn from other engineers and apply business intelligence to help leadership make informed decisions.

Your work focuses on complex and/or ambiguous problem areas in existing or new BI initiatives. You take the long-term view of your team's solutions and how they fit into the team's architecture. You consider where each solution is in its lifecycle and, where appropriate, proactively fix architecture deficiencies. You understand the capabilities and limitations of the systems you work with (e.g. cluster size, concurrent users, data classification). You are able to explain these limitations to technical and non-technical audiences, helping them understand what's currently possible and which efforts need a technology investment. You take ownership of team infrastructure, providing a system-wide view and design guidance. You make things simpler. You drive BI engineering best practices (e.g.
Operational Excellence, code reviews, syntax and naming convention, metric definitions, alarms) and set standards. You collaborate with customers and other internal partners to refine the problem into specific deliverables, and you understand the business context well enough to recommend alternatives and anticipate future requests. In addition to stakeholders, you may work with partner teams (business and technical) and Data Engineers/Data Scientists/BA/SDES/other BIEs to design and deliver the right solution. You contribute to the professional development of colleagues, improving their business and technical knowledge and their understanding of BI engineering best practices. Key job responsibilities Own the development, and maintenance of ongoing metrics, reports, analyses, dashboards on the key drivers of our business Partner with operations and business teams to consult, develop and implement KPI's, automated reporting solutions and infrastructure improvements to meet business needs Develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support business needs Perform both ad-hoc and strategic analyses Strong verbal/written communication and presentation skills, including an ability to effectively communicate with both business and technical teams. Basic Qualifications 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in Statistical Analysis packages such as R, SAS and Matlab Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. 
and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 week ago