
787 Teradata Jobs - Page 19

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Our Company
At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

What You'll Do
Teradata is looking to add a new Program Manager / Product Owner to our existing Global Sales Operations Tools & Technologies team of analytical, problem-solving and solution-oriented product owners and program managers with experience supporting sales teams. The day-to-day focus is on implementation, adoption, hygiene and documenting best practices while being on the leading edge of developing and representing business requirements for our Sales and Channel Partner Cloud Platform and other sales technologies. This position will work closely with Sales and GTM Leadership, Account Teams, the Partner Team (Global Alliances / Client Relationship Management), Sales Operations Managers, Technology and Enablement teams, Marketing, and IT to define and deliver channel partner technology solutions and business processes aligned with our strategy and roadmap. The ideal candidate will be data driven, intellectually curious, a fast learner, and able to move quickly while maintaining focus on high-impact projects aligned to a global strategy, and will develop and make recommendations on business technology and business process improvements. This is a full-time individual contributor position based in a Teradata office in India.

Responsibilities:
Product Owner for the assigned capability / program area, representing the business stakeholder(s) and/or customer(s), and process owner for such designated areas and capabilities. Define, document, and share CRM best practices to ensure sales processes and terminology are consistently understood and applied across the organization and regions. Develop and make recommendations on business process improvements and their impacts to different business / sales and partner areas. Build and manage relationships with cross-functional teams such as Geographic Sales Leadership and Sales Operations Managers, Marketing, and IT to ensure that tools and technologies are set up and aligned to effectively support Teradata’s coverage models around the world. Work closely with Sales Enablement to identify training needs for leadership and account team members on technology, tools, business practices and processes. Actively participate in roadmap identification and prioritization with Business and IT partners, managing all phases of the program / project delivery cycle and bringing recommendations for programs / projects. Determine the business impact of current and future technologies for the GTM Organization.

Who You'll Work With
You will interact directly with field sales, sales leaders, and other team members to capture feedback for sales technology and process improvements to drive adoption and deliver business value.

What Makes You a Qualified Candidate
3-5 years of experience as an Agile / Scrum product or process owner, or 3-5 years in a Sales Operations, Sales Support, or Sales Field Impacting role. Direct experience in managing and driving value from CRM (Salesforce.com) and sales tools and leading / partnering cross-functionally to deliver complex programs and projects. Experience with direct sales and resellers/distribution partner processes in a SaaS/Cloud enterprise company or software vendor, and knowledge of how these processes integrate into existing systems/tools. Experience with Salesforce Partner Relationship Management (PRM), Salesforce Communities, Partner platforms and a good understanding of the different channel partner types is a plus. Must possess business acumen, field-facing acumen, and strong analytical, troubleshooting, problem-solving, and project management skills. Proactive and passionate: independently capable of seeking information, solving conceptual problems, corralling resources, and delivering results in challenging situations. Ability to manage multiple concurrent projects and drive initiatives in a cross-functional environment. Solution / Business Consulting skills, including analysis/evaluation of business and/or system processes and functional recommendations, highly desired. Experience working and communicating with senior executives to solve complex business problems. Bachelor’s Degree in an analytical field (e.g. Computer Science, Information Systems, Business Administration, Engineering, Mathematics, or Statistics).

What You Will Bring
Project/Program Management or Agile / Product Owner Certification a plus, but not required. Salesforce or PRM Certifications a plus, but not required.

Why We Think You’ll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.

Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Location: Gurugram, Haryana, India
Category: Corporate
Job Id: GGN00002085
Marketing / Loyalty / Mileage Plus / Alliances
Job Type: Full-Time
Posted Date: 06/05/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together.

Description
Our Marketing and Loyalty team is the strategic force behind United’s industry-leading brand and experience, supporting revenue growth by turning customers into lifelong United flyers. Our marketing communications, market research and brand teams drive travelers’ familiarity and engagement with the United brand. Our product, design and service teams bring the signature United brand to life in our clubs and onboard our aircraft, ensuring a seamless, premier experience. And when customers choose United again and again, that’s because the loyalty team has been hard at work crafting an award-winning program. Our loyalty team manages United MileagePlus®, building travel and lifestyle partnerships that customers can engage with every day, and works with our Star Alliance partners to ensure United can take you anywhere you want to go.

Job overview and responsibilities
United Airlines reaches out to customers and potential travelers via digital campaigns with new information, travel inspiration, personalized offers, promos, etc. The Digital Marketing & Personalized Offers team at IKC supports all such digital acquisition initiatives with insights to help strategize campaigns and analytics to help measure performance. We work closely with stakeholders in the US to bring these campaigns to life and continuously improve performance with learnings and actionable insights. Assist in campaign planning, targeting and audience identification; measure campaign results and performance using data analysis. Gather and organize data from various sources using SQL/ Python/ R; continuously develop and demonstrate improved analysis methodologies. Create content for and deliver presentations to United leadership and external stakeholders. Ensure alignment and prioritization with business objectives and initiatives – help teams make faster, smarter decisions. Conduct exploratory analysis, identify opportunities, and proactively suggest initiatives to meet marketing objectives. Create, modify and automate reports and dashboards - take ownership of reporting structure and metrics; clearly and effectively communicate relevant information to decision makers using data visualization tools. Ensure seamless stakeholder management and keep lines of communication open with all stakeholders.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc.
Qualifications What’s needed to succeed (Minimum Qualifications): Bachelor's degree 2+ years of experience in Analytics and working with analytical tools Proven comfort and an intellectual curiosity for working with very large data sets Experience in manipulating and analyzing complex, high-volume, high-dimensionality data from various sources to highlight patterns and relationships Proficiency in using database querying tools and writing complex queries and procedures using Teradata SQL and/ or Microsoft SQL Familiarity with one or more reporting tools – Spotfire/ Tableau/ PowerBI Advanced level comfort with Microsoft Office, especially Excel and PowerPoint Ability to communicate analysis in a clear and precise manner High sense of ownership of work Ability to work under time constraints Must be legally authorized to work in India for any employer without sponsorship Must be fluent in English (written and spoken) Successful completion of interview required to meet job qualification Reliable, punctual attendance is an essential function of the position What will help you propel from the pack (Preferred Qualifications): Master's degree Bachelor’s Degree in a quantitative field like Math, Statistics, Analytics and/ or Business SQL/ Python/ R Visualization tools – Tableau/ Spotfire/ PowerBI Understanding of digital acquisition channels Strong knowledge of either Python or R
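As a rough illustration of the campaign-measurement work described above, the sketch below aggregates campaign send/click data with pandas; the column names and figures are invented for the example and are not United's actual data or schema.

import pandas as pd

# Hypothetical digital-campaign results; real data would come from Teradata SQL
# or Microsoft SQL queries rather than an inline frame.
campaigns = pd.DataFrame({
    "campaign": ["spring_sale", "spring_sale", "loyalty_promo", "loyalty_promo"],
    "channel": ["email", "display", "email", "display"],
    "sends": [50000, 80000, 30000, 45000],
    "clicks": [2100, 1500, 1800, 900],
})

# Summarize performance per campaign and compute a click-through rate.
summary = (
    campaigns.groupby("campaign", as_index=False)
    .agg(sends=("sends", "sum"), clicks=("clicks", "sum"))
)
summary["ctr"] = (summary["clicks"] / summary["sends"]).round(4)

print(summary.sort_values("ctr", ascending=False))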

Posted 3 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 30-Jun-2025

About the role
Responsible for providing support via automation while devising efficient reporting solutions in alignment with customer and business needs.

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for
Understanding business needs and developing an in-depth understanding of Tesco processes. Accountable for high-quality and timely completion of specified reporting and dashboarding work. Understanding the end-to-end process of generating reports. Understanding the underlying data sources. Actioning any change request received from partners. Developing user manuals for reporting procedures and related process changes. Handling new report development requests. Leading the transformation of reports into new-age tools and technologies. Providing solutions to issues related to report development and delivery. Maintaining the log of issues, risks and mitigation plans. Identifying operational improvements and applying solutions and automation using Python, Alteryx. Enhancing and developing daily, weekly and periodic reports and dashboards using Advanced Excel, Advanced SQL, Hadoop, Teradata. Partnering with stakeholders to identify problems, collaborating with them to brainstorm on the best possible reporting solution, and delivering solutions in the form of intelligent business reports / dashboards (Tableau, BI). Following our Business Code of Conduct and always acting with integrity and due diligence.

You will need
2-4 years' experience in analytics delivery in any one of domains like retail, CPG, telecom or hospitality, and for one of the following functional areas - marketing, supply chain, customer, merchandising, operations, finance or digital (preferred). Advanced Excel; strong verbal and written communication. Advanced SQL, big data infrastructure, Hadoop, Hive, Python, Spark. Automation platforms Alteryx/Python. Advanced developer knowledge of Tableau, PowerBI. Logical reasoning. Eye for detail. An illustrative report-automation sketch follows this description.

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services operation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
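Since the role involves automating periodic reporting with Python and SQL, here is a small, generic sketch of that pattern; the database, table, and output names are placeholders invented for the example, with SQLite standing in for Teradata/Hadoop sources.

import sqlite3
import pandas as pd

# SQLite stands in here for a real warehouse connection (e.g., Teradata or Hive).
conn = sqlite3.connect("example_sales.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS sales (store TEXT, week TEXT, revenue REAL)"
)
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("BLR-01", "2025-W01", 12500.0), ("BLR-01", "2025-W02", 13100.0),
     ("BLR-02", "2025-W01", 9800.0)],
)
conn.commit()

# Pull the weekly summary with SQL, then export it as the periodic report.
report = pd.read_sql_query(
    "SELECT week, SUM(revenue) AS total_revenue FROM sales GROUP BY week ORDER BY week",
    conn,
)
report.to_csv("weekly_sales_report.csv", index=False)
print(report)

conn.close()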

Posted 3 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Position: Data Analyst
Location: Gurgaon
Timings: 12:00 PM to 10:00 PM

Role Overview
Doing independent research, analyzing, and presenting data as assigned. Expected to work in close collaboration with the EXL team and clients on Commercial insurance actuarial projects for US/UK markets. Should be able to understand risk and underwriting and replicate rating methodology. Develop and use collaborative relationships to facilitate the accomplishment of working goals. Working experience in the P&C insurance domain for US insurance markets is a must. Excellent written and verbal communication skills. Facilitate data requirements while working with actuaries. Have excellent SQL skills to extract data for scheduled processes and ad hoc requests. Automate manual processes and ETL pipelines using Python. Utilise/help migrate existing SAS processes from SAS to SAS Viya.

Key Responsibilities
Collaborate with actuaries to understand their data and reporting needs related to premium, loss, and exposure analysis. Build and optimize complex SQL queries to extract, join, and aggregate large datasets from multiple relational sources. Develop and automate data pipelines in Python for ETL, data wrangling, and exploratory analytics. Use SAS for legacy processes, statistical outputs, and ad hoc data manipulation as required by actuarial models/processes. Validate data outputs for accuracy and consistency, troubleshoot discrepancies, and ensure data quality before delivery. Create documentation of data logic, process flows, and metadata in Confluence and SharePoint to ensure transparency and knowledge sharing. Contribute to continuous improvement by recommending process automation or optimization opportunities in existing workflows. Support dashboarding or visualization needs (optional) using tools like Power BI. Work in an agile or iterative environment with clear communication of progress, blockers and timelines.

Required Skillset
SQL (Expert Level): Complex joins, subqueries, window functions, CTEs, query optimization and performance tuning, working with large tables in cloud/on-premise environments (Teradata, SQL Server, or equivalent). Python (intermediate to expert): Data wrangling using pandas, NumPy; script automation and API consumption; familiarity with Visual Studio, Jupyter and modular Python scripting. SAS (Intermediate): Reading/writing from/to datasets, connecting with external sources, macros, PROC SQL. Knowledge of AWS is preferred. Experience with commercial insurance. Understanding of actuarial concepts such as loss triangles, reserving, and pricing. Exposure to Git, JIRA, Confluence. Proficiency in Excel, VBA Macros (preferred).

Candidate Profile
Bachelor’s/Master's degree in engineering, economics, mathematics, actuarial sciences, or a similar technical degree. A Master’s in business or financial management is also suitable. Affiliation to IAI or IFoA, with at least 3 actuarial exams. 3-8 years' experience in data analytics in the insurance or financial services industry with a good understanding of actuarial concepts - pricing, reserving, and/or valuation. Demonstrated ability to work with actuarial or statistical teams in delivering high-quality data and insights. Strong problem-solving attitude and comfort with ambiguity in requirements. Strong ability to learn technical and business knowledge. Outstanding written and verbal communication skills. Excellent time and work management skills. Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges. Able to understand cross-cultural differences and work with clients across the globe.
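The posting above centers on SQL extraction plus pandas-based wrangling of premium and loss data; below is a minimal sketch of that pattern, assuming a hypothetical policy-level extract (the table and column names are illustrative, not EXL's actual schema).

import pandas as pd

# Hypothetical policy-level extract; in practice this would come from a
# Teradata or SQL Server query via pandas.read_sql rather than an inline frame.
policies = pd.DataFrame({
    "policy_id": [101, 101, 102, 103, 103],
    "accident_year": [2021, 2022, 2021, 2021, 2022],
    "earned_premium": [1200.0, 1250.0, 800.0, 950.0, 990.0],
    "incurred_loss": [300.0, 450.0, 0.0, 700.0, 120.0],
})

# Aggregate premium and loss by accident year and derive a simple loss ratio,
# the kind of exposure/loss summary an actuarial team might request.
summary = (
    policies.groupby("accident_year", as_index=False)
    .agg(earned_premium=("earned_premium", "sum"),
         incurred_loss=("incurred_loss", "sum"))
)
summary["loss_ratio"] = summary["incurred_loss"] / summary["earned_premium"]

print(summary)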

Posted 3 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Source: Naukri

Overall experience of 6 years in DW/BI technologies and a minimum of 5 years' development experience in the ETL DataStage 8.x/9.x tool. Design, develop, and maintain ETL processes using IBM DataStage to extract, transform, and load data from multiple sources into our data warehouse. Collaborate with business analysts and stakeholders to understand data requirements and translate them into technical specifications. Extensive work in parallel jobs and sequences, and preferably in routines. Good conceptual knowledge of data warehousing and various methodologies. Strong SQL database skills in Teradata and other databases like Oracle, SQL Server, DB2, etc. Working knowledge of UNIX shell scripting. Good communication and presentation skills. Should be flexible with the overlapping working hours. Should be able to work independently and act proactively. Develop, implement, and maintain best practices for DataStage. Mandatory skills: DataStage, SQL. Desired skills: Unix, PL/SQL.

Posted 3 weeks ago

Apply

10.0 - 12.0 years

35 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

Gracenote, a Nielsen company, is dedicated to connecting audiences to the entertainment they love, powering a better media future for all people. Gracenote is the content data business unit of Nielsen that powers innovative entertainment experiences for the world's leading media companies. Our entertainment metadata and connected IDs deliver advanced content navigation and discovery to connect consumers to the content they love and discover new ones. Gracenote's industry-leading datasets cover TV programs, movies, sports, music and podcasts in 80 countries and 35 languages. Common identifiers are universally adopted by the world's leading media companies to deliver powerful cross-media entertainment experiences. Machine-driven, human-validated, best-in-class data and images fuel new search and discovery experiences across every screen. Gracenote's Data Organization is a dynamic and innovative group that is essential in delivering business outcomes through data, insights, and predictive & prescriptive analytics. It is an extremely motivated team that values creativity and experimentation through continuous learning in an agile and collaborative manner. From designing, developing and maintaining data architecture that satisfies our business goals to managing data governance and region-specific regulations, the data team oversees the whole data lifecycle.

Role Overview
We are seeking an experienced Senior Data Engineer with 10-12 years of experience to join our Video engineering team at Gracenote - a NielsenIQ Company. In this role, you will design, build, and maintain our data processing systems and pipelines. You will work closely with product managers, architects, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for business, analytical and operational needs.

Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes. Architect and implement data warehousing solutions and data lakes. Optimize data flow and collection for cross-functional teams. Build infrastructure required for optimal extraction, transformation, and loading of data. Ensure data quality, reliability, and integrity across all data systems. Collaborate with data scientists and analysts to help implement models and algorithms. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc. Create and maintain comprehensive technical documentation. Mentor junior engineers and provide technical leadership. Evaluate and integrate new data management technologies and tools. Implement optimization strategies to enable and maintain sub-second latency. Oversee data infrastructure to ensure robust deployment and monitoring of the pipelines and processes. Stay ahead of emerging trends in data and cloud, integrating new research into practical applications. Mentor and grow a team of junior data engineers.

Required Qualifications and Skills
Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred. Expert-level proficiency in Python, SQL, and big data tools (Spark, Kafka, Airflow). Expert knowledge of SQL and experience with relational databases (e.g., PostgreSQL, Redshift, TiDB, MySQL, Oracle, Teradata). Extensive experience with big data technologies (e.g., Hadoop, Spark, Hive, Flink). Proficiency in at least one programming language such as Python, Java, or Scala. Experience with data modeling, data warehousing, and building ETL pipelines. Strong knowledge of data pipeline and workflow management tools (e.g., Airflow, Luigi, NiFi). Experience with cloud platforms (AWS, Azure, or GCP) and their data services; AWS preferred. Hands-on experience building streaming pipelines with Flink, Kafka, Kinesis; Flink preferred. Understanding of data governance and data security principles. Experience with version control systems (e.g., Git) and CI/CD practices. Proven leadership skills in grooming data engineering teams.

Preferred Skills
Experience with containerization and orchestration tools (Docker, Kubernetes). Basic knowledge of machine learning workflows and MLOps. Experience with NoSQL databases (MongoDB, Cassandra, etc.). Familiarity with data visualization tools (Tableau, Power BI, etc.). Experience with real-time data processing. Knowledge of data governance frameworks and compliance requirements (GDPR, CCPA, etc.). Experience with infrastructure-as-code tools (Terraform, CloudFormation).
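Since the role calls for streaming pipelines built on Spark and Kafka, here is a minimal, illustrative PySpark Structured Streaming sketch; the topic name, broker address, and output paths are placeholders, not Gracenote's actual configuration, and the job needs the spark-sql-kafka connector package on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metadata-events-demo").getOrCreate()

# Read a stream of events from a Kafka topic (placeholder broker and topic);
# requires the spark-sql-kafka-0-10 connector to be available at submit time.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "content-updates")
    .load()
)

# Kafka delivers the payload as bytes; cast it to a string and keep the event time.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Land the parsed events as Parquet with a checkpoint for fault tolerance.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "/tmp/content_updates")
    .option("checkpointLocation", "/tmp/content_updates_chk")
    .start()
)

query.awaitTermination()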

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

Looking for an associate with 5+ years of hands-on experience in Informatica PowerCenter/ETL. Experience in batch monitoring and troubleshooting, impact analysis and batch recovery. Good hands-on experience in SQL and RDBMS/Teradata. Proficient in working with schedulers such as Autosys/TWS/Control-M. Basic knowledge of Unix. Strong analytical skills, triaging and engaging the application teams, support groups and DBAs.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Description:

Value Proposition
Responsible for designing and building data pipelines for enterprise data through ETL/ELT processes. Develop and maintain large-scale data platforms, data lakes and cloud solutions.

Job Details
Position Title: Data Engineer II
Career Level: P2
Job Category: Senior Associate
Role Type: Hybrid
Job Location: Bengaluru

About the Team: The data engineering team is a community of dedicated professionals committed to designing, building, and maintaining data platform solutions for the organization.

Impact (Job Summary/Why this Role Matters)
The enterprise data warehouse supports several critical business functions for the bank including Regulatory Reporting, Finance, Risk Steering, and Customer 360. This role is vital for building and maintaining the enterprise data platform and data processes, and for supporting business objectives. Our values - inclusivity, transparency, and excellence - drive everything we do. Join us and make a meaningful impact on the organization.

Key Deliverables (Duties and Responsibilities)
Responsible for building and maintaining the data platform that supports data integrations for the Enterprise Data Warehouse, Operational Data Store, Data Marts, etc., with appropriate data access, data security, data privacy and data governance. Create data ingestion pipelines in data warehouses and other large-scale data platforms. Create data ingestion pipelines for a variety of sources - File (flat, delimited, Excel), DB, API (with Apigee integration), and SharePoint. Build reusable data pipelines / frameworks using Python. Create scheduled as well as trigger-based ingestion patterns using scheduling tools. Create performance-optimized DDLs for any row-based or columnar databases such as Oracle, Postgres, Netezza per the Logical Data Model. Performance tuning of complex data pipelines and SQL queries. Perform impact analysis of proposed changes on existing architecture, capabilities, system priorities, and technology solutions. Work in an Agile framework, participating in various agile ceremonies, coordinating with the scrum master, tech lead, and PO on sprint planning, backlog creation, refinement, demo, and retrospection. Work with Product Owners to understand PI goals, PI planning, requirement clarification, and delivery coordination. Provide technical support for production incidents and failures. Work with global technology teams across different time zones (primarily US) to deliver timely business value.

Skills and Qualification (Functional and Technical Skills)
Functional Skills: 5+ years of experience, 3+ years relevant to Snowflake. Team Player: Support peers, team, and department management. Communication: Excellent verbal, written, and interpersonal communication skills. Problem Solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality. Partnership and Collaboration: Develop and maintain partnerships with business and IT stakeholders. Attention to Detail: Ensure accuracy and thoroughness in all tasks.
Technical/Business Skills: Data Engineering: Experience in designing and building Data Warehouses and Data Lakes. Good knowledge of data warehouse principles and concepts. Technical expertise working in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server. Experience with public cloud-based data platforms, especially Snowflake and AWS.
Data integration skills: Expertise in design and development of complex data pipelines Solutions using any industry leading ETL tools such as SAP Business Objects Data Services (BODS), Informatica Cloud Data Integration Services (IICS), IBM Data Stage. Knowledge of ELT tools such as DBT, Fivetran, and AWS Glue Expert in SQL - development experience in at least one scripting language (Python etc.), adept in tracing and resolving data integrity issues. Data Model: Knowledge of Logical and Physical Data Model using Relational or Dimensional Modeling practices, high volume ETL/ELT processes. Performance tuning of data pipelines and DB Objects to deliver optimal performance. Experience in Gitlab version control and CI/CD processes. Experience working in Financial Industry is a plus. Relationships & Collaboration Reports to: Associate Director - Data Engineering Partners: Senior leaders and cross-functional teams Collaborates: A team of Data Engineering associates Accessibility Needs We are committed to providing an inclusive and accessible hiring process. If you require accommodations at any stage (e.g. application, interviews, onboarding) please let us know, and we will work with you to ensure a seamless experience.
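The role above emphasizes reusable, config-driven ingestion pipelines in Python for file, database, and API sources; below is a minimal sketch of that idea, assuming hypothetical source names and paths rather than the bank's actual framework (reading Excel requires openpyxl and writing Parquet requires pyarrow).

import json
from pathlib import Path

import pandas as pd

# Hypothetical ingestion config: each entry names a source type and its options.
SOURCES = [
    {"name": "daily_trades", "type": "csv", "path": "data/daily_trades.csv"},
    {"name": "branch_master", "type": "excel", "path": "data/branch_master.xlsx"},
]

def read_source(source: dict) -> pd.DataFrame:
    """Dispatch on source type so new formats only need one more branch."""
    if source["type"] == "csv":
        return pd.read_csv(source["path"])
    if source["type"] == "excel":
        return pd.read_excel(source["path"])
    raise ValueError(f"Unsupported source type: {source['type']}")

def ingest(sources: list, staging_dir: str = "staging") -> None:
    """Land each source as Parquet in a staging area for downstream loads."""
    out = Path(staging_dir)
    out.mkdir(exist_ok=True)
    for source in sources:
        frame = read_source(source)
        frame.to_parquet(out / f"{source['name']}.parquet", index=False)
        print(json.dumps({"source": source["name"], "rows": len(frame)}))

if __name__ == "__main__":
    ingest(SOURCES)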

Posted 3 weeks ago

Apply

8.0 - 13.0 years

50 - 55 Lacs

Hyderabad

Work from Office

Source: Naukri

Do you pioneer? Do you enjoy solving complex problems in building and analyzing large datasets? Do you enjoy focusing first on your customer and working backwards? The Amazon transportation controllership team is looking for an experienced Data Engineering Manager with experience in architecting large/complex data systems and a strong record of achieving results, scoping and delivering large projects end-to-end. You will be the key driver in building out our vision for scalable data systems to support the ever-growing Amazon global transportation network businesses. As a Data Engineering Manager in Transportation Controllership, you will be at the forefront of managing large projects, providing vision to the team, and designing and planning large financial data systems that will allow our businesses to scale worldwide. You should have deep expertise in the database design, management, and business use of extremely large datasets, including using AWS technologies - Redshift, S3, EC2, Data Pipeline and other big data technologies. Above all, you should be passionate about warehousing large datasets together to answer business questions and drive change. You should have excellent business acumen and communication skills to be able to work with multiple business teams, and be comfortable communicating with senior leadership. Due to the breadth of the areas of business, you will coordinate across many internal and external teams, and provide visibility to the senior leaders of the company with your strong written and oral communication skills. We need individuals with a demonstrated ability to learn quickly, think big, execute both strategically and tactically, and motivate and mentor their team to deliver business value to our customers on time.

A day in the life
On a daily basis you will: manage and help grow a team of high-performing engineers; understand new business requirements and architect data engineering solutions for them; plan your team's priorities, working with relevant internal/external stakeholders, including sprint planning; resolve impediments faced by the team; update leadership as needed; use judgement in making the right tactical and strategic decisions for the team and organization; and monitor the health of the databases and ingestion pipelines.

2+ years of processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark or a Hadoop-based big data solution) experience. 2+ years of relational database technology (such as Redshift, Oracle, MySQL or MS SQL) experience. 2+ years of developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes) experience. 5+ years of data engineering experience. Experience managing a data or BI team. Experience communicating to senior management and customers verbally and in writing. Experience leading and influencing the data or BI strategy of your team or organization. Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS. Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with AWS tools and technologies (Redshift, S3, EC2).

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

hackajob is collaborating with American Express to connect them with exceptional tech professionals for this role. You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. Experience writing software in Python or similar. Experience with data structures, algorithms, and software design. Exposure to Data Science including Predictive Modelling. Develop Algorithms in multilingual conversational systems. Solve real-world scenarios for user commands and requests by identifying the right LLM models, tooling and frameworks. Proven experience in developing and working with large language models (GPT-3, BERT, T5, etc.) and productionizing them on the cloud. Strong foundation in machine learning concepts and techniques, including deep learning architectures, natural language processing, and text generation. Proficiency in programming languages such as Python, TensorFlow, PyTorch, and related libraries for model development and deployment. Demonstrated ability to design, train, fine-tune, and optimize large language models for specific tasks. Expertise in pre-processing and cleaning large datasets for training models. Familiarity with data augmentation techniques to enhance model performance. Knowledge of LLM operations , including evaluating model performance using appropriate metrics and benchmarks. Ability to iterate and improve models based on evaluation results. Experience in deploying language models in production environments and integrating them into applications, platforms, or services. Exposure in building Predictive models using machine learning through all phases of development, from design through training, evaluation, validation, and implementation. Experience with modern AI/ML & NLP Frameworks (e.g. Tensorflow), Dialogue Managers (e.g.Rasa), Search (e.g. Google Bert, GPT-3), Parsers (e.g. Dialogflow). Review architecture and provide technical guidance for engineers Perform statistical analysis of results and refine models Experience on various data architectures, latest tools, current and future trends in data engineering space especially Big Data, Streaming and Cloud technologies like GCP, AWS, Azure. Hands on experience with Big Data technologies (Spark, Kafka, Hive, etc.) and have at least 1 Big data implementation on platforms like Cornerstone, Teradata, etc. Experience with Visualization Tools like Tableau, Power BI, etc. Experience with complex, high volume, multi-dimensional data, as well as ML/AI models for unstructured, structured, and streaming datasets. 
Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks. Exposure to building cloud-native platforms on a modern tech stack: AWS, Java, Spring Framework, RESTful APIs, and container-based applications. Ability to learn new tools and paradigms in data engineering and science. Proven experience attracting, hiring, retaining, and leading top engineering talent. Creative, passionate, and experienced leader of both people and technology. Team management savvy (e.g., planning, budgetary control, people management, vendor management, etc.). Experience with DevOps, reliability engineering, and platform monitoring. Well versed in Agile, DevOps and Program Management methods. Bachelor's degree with a preference for Computer Science.

Minimum Qualifications
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related STEM field. 3 years of experience with applying Agile methodologies. 3+ years of experience with Java, microservices, React framework. 1 year of experience with public cloud platform (GCP, AWS, ...) optimization, enabling managed and serverless services.

Preferred Qualifications
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related STEM field. 3+ years of experience with Python, microservices, React framework.

Benefits
We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description Our Marketing and Loyalty team is the strategic force behind United’s industry-leading brand and experience, supporting revenue growth by turning customers into lifelong United flyers. Our marketing communications, market research and brand teams drive travelers’ familiarity and engagement with the United brand. Our product, design and service teams bring the signature United brand to life in our clubs and onboard our aircraft, ensuring a seamless, premier experience. And when customers choose United again and again, that’s because the loyalty team has been hard at work crafting an award-winning program. Our loyalty team manages United MileagePlus®, building travel and lifestyle partnerships that customers can engage with every day, and works with our Star Alliance partners to ensure United can take you anywhere you want to go. Job Overview And Responsibilities United Airlines reaches out to customers and potential travelers via digital campaigns with new information, travel inspiration, personalized offers, promos, etc. The Digital Marketing & Personalized Offers team at IKC supports all such digital acquisition initiatives with insights to help strategize campaigns and analytics to help measure performance. We work closely with stakeholders in the US to bring these campaigns to life and continuously improve performance with learnings and actionable insights. Ensure alignment and prioritization with business objectives and initiatives – help teams make faster, smarter decisions Conduct exploratory analysis, identify opportunities, and proactively suggest initiatives to meet marketing objectives Assist in campaign planning, targeting and audience identification; measure campaign results and performance using data analysis Create content for and deliver presentations to United leadership and external stakeholders Own workstreams to deliver results, while leading other team members Ensure seamless stakeholder management and keep lines of communication open with all stakeholders Create, modify and automate reports and dashboards - take ownership of reporting structure and metrics, clearly and effectively communicate relevant information to decision makers using data visualization tools This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc. 
Qualifications

What's needed to succeed (Minimum Qualifications): Bachelor's degree or 4 years of relevant work experience. 4+ years of experience in Analytics and working with analytical tools. Proven comfort and an intellectual curiosity for working with very large data sets. Experience in manipulating and analyzing complex, high-volume, high-dimensionality data from various sources to highlight patterns and relationships. Proficiency in using database querying tools and writing complex queries and procedures using Teradata SQL and/ or Microsoft SQL. Familiarity with one or more reporting tools – Spotfire/ Tableau. Advanced level comfort with Microsoft Office, especially Excel and PowerPoint. Ability to communicate analysis in a clear and precise manner. High sense of ownership of work, and ability to lead a team. Ability to work under time constraints. Must be legally authorized to work in India for any employer without sponsorship. Must be fluent in English (written and spoken). Successful completion of interview required to meet job qualification. Reliable, punctual attendance is an essential function of the position.

What will help you propel from the pack (Preferred Qualifications): Master's degree. Bachelor's Degree in a quantitative field like Math, Statistics, Analytics and/ or Business. SQL/ Python/ R. Visualization tools – Tableau/ Spotfire. Understanding of digital acquisition channels. Strong knowledge of either Python or R.

Job Id: GGN00002080

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques; designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements; and working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
Design and implement efficient database schemas and data models using Teradata. Optimize SQL queries and stored procedures for performance. Perform database administration tasks including installation, configuration, and maintenance of Teradata systems.

Preferred Technical and Professional Experience
You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

20 - 35 Lacs

Pune

Work from Office

Source: Naukri

Sr Solutions Engineer

About the job
Mid-level position based out of Pune (5-7 years).

Senior Solutions Engineer Job Description:
We are looking for a developer strong in Python and Spark, along with ETL, complex SQL, and cloud.
Primary skills: Python, Database, Spark. Secondary: Azure/AWS, APIs.

In This Role, You Will
Develop and maintain scalable and efficient backend systems, ensuring high performance and responsiveness to requests from the front-end. Design and implement cloud-based solutions, primarily on Microsoft Azure. Manage and optimize CI/CD pipelines for rapid and reliable deployment of software updates. Collaborate with frontend developers and other team members to establish objectives and design more functional, cohesive code to enhance the user experience. Develop and maintain databases and server-side applications. Ensure the security of the backend infrastructure.

Preferred Qualifications
5-7 years of experience developing with Python. Experience in automation using Python. Experience building REST APIs. Experience with mobile application development is advantageous. Experience working within a cloud environment. Bachelor's degree in computer science or a related field, or related experience. Experience with CI/CD tools like Jenkins, GitLab CI, or Azure DevOps. In-depth understanding of database technologies (SQL and NoSQL) and web server technologies. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

Posted 3 weeks ago

Apply

3.0 years

5 - 8 Lacs

Hyderābād

On-site

Source: Glassdoor

We are seeking an experienced and motivated Data Engineer to join our team. In this role, you will design, build, and maintain scalable data solutions to support critical business needs. You will work with distributed data platforms, cloud infrastructure, and modern data engineering tools to enable efficient data processing, storage, and analytics. The role includes participation in an on-call rotation to ensure the reliability and availability of our systems and pipelines.

Key Responsibilities
Data Platform Development: Design, develop, and maintain data pipelines and workflows on distributed data platforms such as BigQuery, Hadoop/EMR/DataProc, or Teradata. Cloud Integration: Build and optimize cloud-based solutions using AWS or GCP to process and store large-scale datasets. Workflow Orchestration: Design and manage workflows and data pipelines using Apache Airflow to ensure scalability, reliability, and maintainability. Containerization and Orchestration: Deploy and manage containerized applications using Kubernetes for efficient scalability and resource management. Event Streaming: Work with Kafka to implement reliable and scalable event streaming systems for real-time data processing. Programming and Automation: Write clean, efficient, and maintainable code in Python and SQL to automate data processing, transformation, and analytics tasks. Database Management: Design and optimize relational and non-relational databases to support high-performance querying and analytics. System Monitoring & Troubleshooting: Participate in the on-call rotation to monitor systems, address incidents, and ensure the reliability of production environments. Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and product managers, to understand data requirements and deliver solutions that meet business objectives. Participate in code reviews, technical discussions, and team collaboration to deliver high-quality software solutions.

This role includes participation in an on-call rotation to ensure the reliability and performance of production systems. Rotation Schedule: Weekly rotation beginning Tuesday at 9:00 PM PST through Monday at 9:00 AM PST. Responsibilities During On-Call: Monitor system health and respond to alerts promptly. Troubleshoot and resolve incidents to minimize downtime. Escalate issues as needed and document resolutions for future reference.

Requirements
Primary technologies: BigQuery or another distributed data platform, for example Big Data (Hadoop/EMR/DataProc), Snowflake, Teradata, or Netezza; AWS; GCP; Kubernetes; Kafka; Python; SQL. Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience). 3+ years of experience in data engineering or related roles. Hands-on experience with distributed data platforms such as BigQuery, Hadoop/EMR/DataProc, Snowflake, or Teradata. Proficiency in Apache Airflow for building and orchestrating workflows and data pipelines. Proficiency in Python and SQL for data processing and analysis. Experience with cloud platforms like AWS or GCP, including building scalable solutions. Familiarity with Kubernetes for container orchestration. Knowledge of Kafka for event streaming and real-time data pipelines. Strong problem-solving skills and ability to troubleshoot complex systems. Excellent communication and collaboration skills to work effectively in a team environment.

Preferred
Familiarity with CI/CD pipelines for automated deployments.
Knowledge of data governance, security, and compliance best practices. Experience with DevOps practices and tools. We have a global team of amazing individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 100 retail and CPG companies, leading store chains, fast growth fintech, and multiple Silicon Valley startups. What makes Confiz stand out is our focus on processes and culture. Confiz is ISO 9001:2015 (QMS), ISO 27001:2022 (ISMS), ISO 20000-1:2018 (ITSM) and ISO 14001:2015 (EMS) Certified. We have a vibrant culture of learning via collaboration and making workplace fun. People who work with us work with cutting-edge technologies while contributing success to the company as well as to themselves. To know more about Confiz Limited, visit https://www.linkedin.com/company/confiz/
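Given the emphasis on Apache Airflow orchestration in this posting, here is a minimal, illustrative DAG sketch for Airflow 2.x; the DAG id, task logic, and schedule are hypothetical examples, not Confiz's actual pipeline.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder extract step; a real task might pull from BigQuery or Kafka.
    print("extracting source data")

def load():
    # Placeholder load step; a real task might write to a warehouse table.
    print("loading into the warehouse")

with DAG(
    dag_id="example_daily_ingest",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task        # run extract before load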

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

Remote

Source: Glassdoor

Overview: As an Analyst, Data Modeler, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams. As a member of the Data Modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities:
Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
Govern data design/modeling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
Assist with data planning, sourcing, collection, profiling, and transformation.
Create Source-to-Target Mappings for ETL and BI developers (see the sketch below).
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in-transit.
Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.

Qualifications:
Bachelor's degree required in Computer Science, Data Management/Analytics/Science, Information Systems, Software Engineering or a related technology discipline.
5+ years of overall technology experience that includes at least 2+ years of data modeling and systems architecture.
Around 2+ years of experience with Data Lake infrastructure, Data Warehousing, and Data Analytics tools.
2+ years of experience developing enterprise data models.
Experience in building solutions in the retail or supply chain space.
Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
Excellent verbal and written communication and collaboration skills.
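Since the posting lists Source-to-Target Mappings and Spark/Databricks among its deliverables, here is a minimal, hedged sketch of how an STM can be expressed as data and applied in PySpark. All table, column, and type names are hypothetical placeholders, not values from the posting.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("stm_sketch").getOrCreate()

# Hypothetical source extract; in practice this would be read from the lake.
source_df = spark.createDataFrame(
    [("C001", "2024-01-15", "1250.50"), ("C002", "2024-01-16", "980.00")],
    ["cust_id", "ord_dt", "ord_amt"],
)

# The STM expressed as data: source column -> (target column, target type).
stm = {
    "cust_id": ("customer_id", "string"),
    "ord_dt": ("order_date", "date"),
    "ord_amt": ("order_amount", "decimal(18,2)"),
}

# Apply the mapping: rename and cast every mapped column.
target_df = source_df.select(
    [F.col(src).cast(typ).alias(tgt) for src, (tgt, typ) in stm.items()]
)
target_df.printSchema()
target_df.show()

Keeping the mapping itself as plain data makes it easy to review with BI developers and to regenerate when the model changes.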

Posted 3 weeks ago

Apply

0 years

15 - 25 Lacs

Gurgaon

On-site

GlassDoor logo

Overview
C5i
C5i is a pure-play AI & Analytics provider that combines the power of human perspective with AI technology to deliver trustworthy intelligence. The company drives value through a comprehensive solution set, integrating multifunctional teams that have technical and business domain expertise with a robust suite of products, solutions, and accelerators tailored for various horizontal and industry-specific use cases. At the core, C5i's focus is to deliver business impact at speed and scale by driving adoption of AI-assisted decision-making. C5i caters to some of the world's largest enterprises, including many Fortune 500 companies. The company's clients span Technology, Media, and Telecom (TMT), Pharma & Life Sciences, CPG, Retail, Banking, and other sectors. C5i has been recognized by leading industry analysts like Gartner and Forrester for its Analytics and AI capabilities and proprietary AI-based platforms.
Global offices: United States | Canada | United Kingdom | United Arab Emirates | India

Job Summary
We are looking for experienced Data Modelers to support large-scale data engineering and analytics initiatives. The role involves developing logical and physical data models, working closely with business and engineering teams to define data requirements, and ensuring alignment with enterprise standards.

Responsibilities
Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, Spark, Databricks Delta Lakehouse or other cloud data warehousing technologies.
Govern data design/modelling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Develop a deep understanding of business domains like Customer, Sales, Finance, Supplier, and the enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Drive collaborative reviews of data model design, code, data, and security features to drive data product development.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; SAP data models.
Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping.
Partner with the data stewards team for data discovery and action by business customers and stakeholders.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Assist with data planning, sourcing, collection, profiling, and transformation.
Support data lineage and mapping of source system data to canonical data stores.
Create Source-to-Target Mappings (STTM) for ETL and BI developers.

Skills needed:
Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models) and in CPG / Manufacturing / Sales / Finance / Supplier / Customer domains.
Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data such as IRI and Nielsen Retail.

C5i is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, religion, sex, sexual orientation, age, marital status, disability, gender identity, etc. If you have a disability or special need that requires accommodation, please let us know during the hiring process so that we can make the necessary accommodations.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Work Location: In person

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Chennai

Work from Office

Naukri logo

Responsibility:
Develop and set up the transformation of data from sources to enable analysis and decision making.
Maintain data flow from source to the designated target without affecting the crucial data flow, and play a critical part in the data supply chain by ensuring stakeholders can access and manipulate data for routine and ad hoc analysis.
Implement projects focused on collecting, aggregating, storing, reconciling, and making data accessible from disparate sources.
Provide support during the full lifecycle of data, from ingestion through analytics to action.
Analyze and organize raw data.
Evaluate business needs and objectives.
Interpret trends and patterns.
Conduct complex data analysis and report on results.
Coordinate with the source team and end users and develop solutions.
Implement data governance policies and support data-versioning processes.
Maintain security and data privacy.

Requirements
Must Have:
Proven hands-on experience in building complex analytical queries in Teradata (see the sketch below).
4+ years of extensive programming experience in Teradata Tools and Utilities.
Hands-on experience in Teradata utilities such as FastLoad, MultiLoad, BTEQ, and TPT.
Experience in data quality management and best practices across data solution implementations.
Experience in development, testing and deployment, coding standards, and best practices.
Experience in preparing technical design documentation.
Strong team collaboration and experience working with remote teams.
Knowledge of data modelling and database management, such as performance tuning of Enterprise Data Warehouse, Data Mart, and Business Intelligence Reporting environments, and support for the integration of those systems with other applications.

Good to have:
Should be good in Unix shell scripting.
Experience in data transformation using ETL/ELT tools.
Experience in different relational databases (i.e. Teradata, Oracle, PostgreSQL).
Experience with CI/CD development and deployment tools (i.e. Maven, Jenkins, Git, Kubernetes).
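As a hedged illustration of the analytical-query work mentioned above, here is a minimal sketch that runs a windowed aggregate against Teradata from Python using the teradatasql driver. The host, credentials, database, and column names are placeholders, not details from the posting.

import teradatasql

# Placeholder connection details; in practice these come from a secrets store.
with teradatasql.connect(host="td.example.com", user="demo_user", password="****") as con:
    with con.cursor() as cur:
        cur.execute(
            """
            SELECT region
                 , SUM(sales_amt)                             AS total_sales
                 , RANK() OVER (ORDER BY SUM(sales_amt) DESC) AS sales_rank
            FROM   sandbox_db.daily_sales
            GROUP  BY region
            """
        )
        for row in cur.fetchall():
            print(row)

The same query could equally be embedded in a BTEQ script for batch execution; the utilities named in the posting (FastLoad, MultiLoad, TPT) would handle the bulk-load side rather than the reporting query shown here.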

Posted 3 weeks ago

Apply

0 years

2 - 8 Lacs

Chennai

On-site

GlassDoor logo

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
EIT is a centralized group within Global Risk Management responsible for independent testing of operational processes within the eight lines of business and enterprise control functions at Bank of America, to ensure the company is in compliance with domestic and international laws, rules and regulations, and that risk and controls procedures are operating effectively.

Job Description*
The Sampling and Test Development Specialist II, with minimal supervision, works in collaboration with the Test Owners, Front Line Units, other Sampling and Test Development resources and the Test Execution Teams to design and assess the quality of both manual and automated tests, validate data sourcing, conduct required sampling governance or distribute samples for testing, and design or revise sampling procedure documentation, with expert-level efficiency and quality. This includes driving test structure to support automation. They will make required changes to new and existing test scripts and test plan documentation, as well as sample and data requirements, and maintain integrity within the system of record. The Sampling and Test Development Specialist II will independently gather test scripting and data requirements and work with data partners to ensure appropriate test design and sampling requirements are incorporated into the Test Plans. Evaluates whether pilot testing is required, participates in testing as needed, and participates in other phases of testing (intake, execution, reporting) to provide expertise and feedback on assigned areas. Maintains SOR (System of Record) tracking of test status per standards. Provides peer coaching and direction where needed.

Responsibilities*
This role is responsible for accessing pertinent databases or acquiring raw data from third-party sources along with all associated documentation, and for documenting business procedures within testing scripts. The Sampling and Test Development Specialist II often acts independently to uncover and resolve issues associated with procurement of data to be used for testing and the structure and design of complex tests. This role will deliver high-quality results and manage, manipulate and summarize large quantities of data (see the sampling sketch below). The Sampling and Test Development Specialist II must participate in, and occasionally lead, additional projects across Sampling and Test Development, including escalation of areas requiring process refinement and revision, and taking a leadership role to effect changes when needed.

Requirements*
Education: Graduates or Post-Graduates in Computer Science, Software Engineering, or Statistics. B.Tech/B.E./B.Sc. (Statistics)/B.C.A./M.C.A./M.Sc. (Statistics)
Certifications If Any: NA
Experience Range: 4-6 yrs

Foundational skills*
Advanced understanding of automation tools and ability to influence test owners to define ways to structure tests in an automated fashion.
Advanced knowledge of data warehouse and mining concepts and a baseline understanding of SAS/SQL query language and syntax.
Experience building queries to source data from a variety of different types of data sources such as DB2, Teradata, Oracle, SQL Server, Hadoop, Hive, Python.
Proficiency with the MS Office suite, with an emphasis on Excel, to perform data analytics, pivot tables, and lookups.
Proven ability to leverage automation efficiencies, tools, and capabilities where possible.
Experience building data acquisition routines in a tool such as Trifacta, Alteryx, MicroStrategy, Tableau, Cognos, or Python (or other similar business intelligence applications).
Strong research, analytical, problem-solving, and technical skills.
Demonstrated project management skills; ability to handle multiple competing priorities with demonstrated success at achieving SLAs (Service Level Agreements).
Strong partnership and influencing skills.
Excellent verbal and written communication skills as well as interpersonal skills.
Self-starter, organized, versatile, capable of performing work independently with minimal direction.
Ability to think independently, solve complex problems, and develop integrated solutions.
Ability to translate business objectives into comprehensive test requirements.
Demonstrated ability to interface effectively with senior management.
Strong team player.

Desired skills:
Compliance or Risk certification a plus.

Work Timings: 1.30 PM - 10.30 PM
Job Location: Chennai
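Because the role centres on sampling governance and reproducible sample distribution, here is a minimal, self-contained sketch of drawing a seeded, stratified sample with pandas. The population data is synthetic and the stratum names are purely illustrative.

import pandas as pd

# Synthetic test population: 1,000 accounts across three hypothetical strata.
population = pd.DataFrame({
    "account_id": range(1, 1001),
    "line_of_business": ["consumer"] * 600 + ["small_business"] * 300 + ["wealth"] * 100,
})

SAMPLE_FRACTION = 0.05   # 5% sample drawn from each stratum
RANDOM_SEED = 42         # fixed seed so reviewers can reproduce the exact sample

sample = (
    population
    .groupby("line_of_business", group_keys=False)
    .sample(frac=SAMPLE_FRACTION, random_state=RANDOM_SEED)
)

print(sample["line_of_business"].value_counts())

In practice the population would be sourced from one of the databases the posting lists (DB2, Teradata, Oracle, SQL Server, Hive) rather than generated in code, and the sample would be logged to the system of record.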

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

Overview
Enterprise Data Operations Assoc Manager

Job Overview: As Data Modelling Assoc Manager, you will be the key technical expert overseeing data modelling and will drive a strong vision for how data modelling can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data modelers who create data models for deployment in the Data Foundation layer, ingesting data from various source systems, resting data on the PepsiCo Data Lake, and enabling exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data modelling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will independently analyze project data needs, identify data storage and integration needs/issues, and drive opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will be a key technical expert performing all aspects of data modelling, working closely with the Data Governance, Data Engineering and Data Architecture teams, and will provide technical guidance to junior members of the team as and when needed. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities:
Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
Govern data design/modelling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned (see the glossary sketch below).
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
Advocate existing Enterprise Data Design standards; assist in establishing and documenting new standards.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
Assist with data planning, sourcing, collection, profiling, and transformation.
Create Source-to-Target Mappings for ETL and BI developers.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in-transit.
Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the data science team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.

Qualifications:
12+ years of overall technology experience that includes at least 6+ years of data modelling and systems architecture.
6+ years of experience with Data Lake infrastructure, Data Warehousing, and Data Analytics tools.
6+ years of experience developing enterprise data models.
6+ years of cloud data engineering experience in at least one cloud (Azure, AWS, GCP).
6+ years of experience building solutions in the retail or supply chain space.
Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models).
Fluent with Azure cloud services; Azure certification is a plus.
Experience scaling and managing a team of 5+ data modelers.
Experience with integration of multi-cloud services with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).

Skills, Abilities, Knowledge:
Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
Proven track record of leading, mentoring, hiring and scaling data teams.
Strong change manager; comfortable with change, especially that which arises through company growth.
Ability to understand and translate business requirements into data and technical requirements.
High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs.
Foster a team culture of accountability, communication, and self-management.
Proactively drive impact and engagement while bringing others along.
Consistently attain/exceed individual and team goals.
Ability to lead others without direct authority in a matrixed environment.

Differentiating Competencies Required
Ability to work with virtual teams (remote work locations); lead a team of technical resources (employees and contractors) based in multiple locations across geographies.
Lead technical discussions, driving clarity of complex issues/requirements to build robust solutions.
Strong communication skills to meet with the business, understand sometimes ambiguous needs, and translate them into clear, aligned requirements.
Able to work independently with business partners to understand requirements quickly, perform analysis and lead design review sessions.
Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business.
Places the user at the center of decision making.
Teams up and collaborates for speed, agility, and innovation.
Experience with, and embraces, agile methodologies.
Strong negotiation and decision-making skills.
Experience managing and working with globally distributed teams.
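The governance item above calls for documenting business definitions of entities and attributes; here is a minimal, hedged sketch of capturing such definitions as structured data that a catalog or glossary tool could ingest. The entity and attribute names are hypothetical examples, not PepsiCo's actual model.

import json

glossary = {
    "Customer": {
        "definition": "A party that purchases or receives products.",
        "attributes": {
            "customer_id": "Surrogate key uniquely identifying a customer.",
            "customer_name": "Legal or trading name of the customer.",
        },
    },
    "SalesOrder": {
        "definition": "A confirmed request from a customer to purchase products.",
        "attributes": {
            "order_id": "Surrogate key uniquely identifying a sales order.",
            "order_date": "Calendar date on which the order was placed.",
            "customer_id": "Foreign key referencing Customer.customer_id.",
        },
    },
}

# Persist the glossary so it can be versioned in Git alongside the data models.
with open("business_glossary.json", "w", encoding="utf-8") as fh:
    json.dump(glossary, fh, indent=2)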

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Overview
Data Platform Engineer - Transformation & Modernization

Job Overview: We are looking for a Data Platform Engineer to be a key player in our transformation and modernization programs, leading the migration of applications from legacy systems to Azure-based architectures. This role involves designing, implementing, and optimizing scalable, cloud-native data solutions using Databricks, Azure DevOps (ADO), and Agile development methodologies. As an active contributor to code development, you will help drive automation, operational excellence, and data quality across our platforms. You will collaborate with data science and product teams to create solutions that enhance our data-driven decision-making capabilities.

Responsibilities:
Lead the migration and modernization of data platforms, moving applications and pipelines to Azure-based solutions.
Actively contribute to code development in projects and services.
Manage and scale data pipelines from internal and external data sources to support new product launches and ensure high data quality.
Develop automation and monitoring frameworks to capture key metrics and operational KPIs for pipeline performance (see the sketch below).
Implement best practices around systems integration, security, performance, and data management.
Collaborate with internal teams, including data science and product teams, to drive solutioning and proof-of-concept (PoC) discussions.
Develop and optimize procedures to transition data into production.
Define and manage SLAs for data products and operational processes.
Prototype and build scalable solutions for data engineering and analytics.
Research and apply state-of-the-art methodologies in data and platform engineering.
Create and maintain technical documentation for knowledge sharing.
Develop reusable packages and libraries to enhance development efficiency.

Qualifications:
Bachelor's degree in Computer Science, MIS, Business Management, or a related field.
10+ years' experience in Information Technology.
4+ years of Azure, AWS and cloud technologies.
Experience in data platform engineering, with a focus on cloud transformation and modernization.
Strong knowledge of Azure services, including Databricks, Azure Data Factory, Synapse Analytics, and Azure DevOps (ADO).
Proficiency in SQL, Python, and Spark for data engineering tasks.
Hands-on experience building and scaling data pipelines in cloud environments.
Experience with CI/CD pipeline management in Azure DevOps (ADO).
Understanding of data governance, security, and compliance best practices.
Experience working in an Agile development environment.
Prior experience in migrating applications from legacy platforms to the cloud.
Knowledge of Terraform or Infrastructure-as-Code (IaC) for cloud resource management.
Familiarity with Kafka, Event Hubs, or other real-time data streaming solutions.
Experience with legacy RDBMS (Oracle, DB2, Teradata).
Background in supporting data science models in production.

Work Location: Employee must be based in a PepsiCo office. Primary Work Location: Hyderabad HUB-IND. Relocation approved: No.
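As a hedged illustration of the monitoring item above, here is a minimal, self-contained sketch that captures operational KPIs for a single pipeline run (duration, rows processed, status) as a JSON record. The pipeline name, step, and metric fields are illustrative assumptions; in an Azure setup the record would typically be pushed to Log Analytics or a Databricks/ADF monitoring table.

import json
import time
from datetime import datetime, timezone

def run_pipeline_step() -> int:
    """Placeholder for a real ingestion or transformation step; returns a row count."""
    time.sleep(0.1)
    return 12500

def capture_run_metrics() -> dict:
    started = time.perf_counter()
    status = "succeeded"
    rows = 0
    try:
        rows = run_pipeline_step()
    except Exception:
        status = "failed"
        raise
    finally:
        metrics = {
            "pipeline": "sales_ingest",   # illustrative pipeline name
            "run_timestamp": datetime.now(timezone.utc).isoformat(),
            "duration_seconds": round(time.perf_counter() - started, 3),
            "rows_processed": rows,
            "status": status,
        }
        print(json.dumps(metrics))        # stand-in for emitting to a monitoring sink
    return metrics

if __name__ == "__main__":
    capture_run_metrics()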

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Overview
Data Engineering Assoc Manager (L09).

Responsibilities
Enhance and maintain data pipelines on EDF.
Requirement analysis and data analysis.
Work on application migration from Teradata to EDF.
Lead a team of data engineers and testers.

Qualifications
Data engineer with 10+ years of experience.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Hyderabad, Greater Noida

Work from Office

Naukri logo

Work closely with source data application teams and product owners to design, implement and support analytics solutions that provide insights to make better decisions.
Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools.
Perform multiple aspects involved in the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support.
Provide technical leadership and collaborate within a team environment as well as work independently.
Be a part of a DevOps team that completely owns and supports its product.
Implement batch and streaming data pipelines using cloud technologies (see the batch sketch below).
Lead development of coding standards, best practices and privacy and security guidelines.
Mentor others on technical and domain skills to create multi-functional teams.

All you'll need for success
Minimum Qualifications: Education & Prior Job Experience
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or related technical discipline, or equivalent experience/training
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
3. 3 years of data engineering experience using SQL
4. 2 years of cloud development (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps and Power BI
5. Combination of development, administration and support experience in several of the following tools/platforms required:
a. Scripting: Python, PySpark, Unix, SQL
b. Data Platforms: Teradata, SQL Server
c. Azure Data Explorer (administration skills are a plus)
d. Azure Cloud Technologies:

Top 3 Mandatory Skills and Experience: SQL, Python, PySpark
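Since SQL, Python, and PySpark are listed as the top mandatory skills, here is a minimal, hedged sketch of a batch pipeline step in PySpark: read a raw file, apply a light transformation, and write curated output. The input/output paths and column names are placeholders, not details from the posting.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("batch_pipeline_sketch").getOrCreate()

# Placeholder path; in Azure this would typically be an ADLS location mounted in Databricks.
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

curated = (
    raw
    .withColumn("order_amount", F.col("order_amount").cast("double"))
    .withColumn("load_date", F.current_date())
    .dropDuplicates(["order_id"])
)

curated.write.mode("overwrite").partitionBy("load_date").parquet("/mnt/curated/orders/")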

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Overview
The primary focus would be to lead development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes.

Responsibilities
Delivery of key Enterprise Data Warehouse and Azure Data Lake projects within time and budget.
Contribute to solution design and build to ensure scalability, performance and reuse of data and other components.
Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards.
Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties.
Enthusiastic, willing and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement.
A clear communicator, both written and verbal, with good presentational skills; fluent and proficient in the English language.
Customer focused and a team player.

Qualifications
Experience
Bachelor's degree in Computer Science, MIS, Business Management, or a related field.
3+ years' experience in Information Technology.
1+ years' experience in Azure Data Lake.

Technical Skills
Proven experience in development activities on Data, BI or Analytics projects.
Solutions delivery experience - knowledge of the system development lifecycle, integration, and sustainability.
Strong knowledge of Teradata architecture and SQL.
Good knowledge of Azure Data Factory or Databricks.
Knowledge of Presto / Denodo / Infoworks is desirable.
Knowledge of data warehousing concepts and data catalog tools (Alation).

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Overview
This role serves as an Associate Analyst for the GTM Data Analytics COE project development team. The role is one of the go-to resources for building and maintaining key reports, data pipelines and advanced analytics necessary to bring insights to light for senior leaders and Sector and field end users.

Responsibilities
The COE's core competencies are mastery of data visualization, data engineering, data transformation, and predictive and prescriptive analytics.
Enhance data discovery, processes, testing, and data acquisition from multiple platforms.
Apply detailed knowledge of PepsiCo's applications for root-cause problem-solving.
Ensure compliance with PepsiCo IT governance rules and design best practices.
Participate in project planning with stakeholders to analyze business opportunities and define end-to-end processes.
Translate operational requirements into actionable data presentations.
Support data recovery and integrity issue resolution between business and PepsiCo IT.
Provide performance reporting for the GTM function, including ad-hoc requests using internal shipment data systems.
Develop on-demand reports and scorecards for improved agility and visualization.
Collate and analyze large data sets to extract meaningful insights on performance trends and opportunities (see the sketch below).
Present insights and recommendations to the GTM Leadership team regularly.
Manage expectations through effective communication with headquarters partners.
Ensure timely and accurate data delivery per service level agreements (SLAs).
Collaborate across functions to gather insights for action-oriented analysis.
Identify and act on opportunities to improve work delivery.
Implement process improvements, reporting standardization, and optimal technology use.
Foster an inclusive and collaborative environment.
Provide baseline support for monitoring SPA mailboxes, work intake, and other ad-hoc requests and queries.

Qualifications
Undergraduate degree in Business or a related technology field.
3-4 years of working experience in Power BI.
1-2 years of working experience in SQL and Python.

Preferred qualifications:
Information technology or analytics experience is a plus.
Familiarity with Power BI / Tableau, Python, SQL, Teradata, Azure, MS Fabric.
Requires strong analytical, critical-thinking, and problem-solving skills, as well as great attention to detail.
Strong time management skills, with the ability to multitask, set priorities, and plan.
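As a hedged illustration of the trend-analysis item above, here is a minimal, self-contained sketch of summarizing month-over-month shipment growth with pandas before visualizing it in Power BI. The figures and region names are synthetic and purely illustrative.

import pandas as pd

shipments = pd.DataFrame({
    "month": pd.to_datetime(["2024-01-01", "2024-02-01", "2024-03-01", "2024-04-01"] * 2),
    "region": ["North"] * 4 + ["South"] * 4,
    "cases_shipped": [1200, 1350, 1280, 1500, 900, 880, 950, 1020],
})

# Month-over-month growth per region, rounded to one decimal place.
trend = (
    shipments
    .sort_values(["region", "month"])
    .assign(mom_growth_pct=lambda d: d.groupby("region")["cases_shipped"]
                                      .pct_change()
                                      .mul(100)
                                      .round(1))
)

print(trend)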

Posted 3 weeks ago

Apply

6.0 - 10.0 years

7 - 11 Lacs

Greater Noida

Work from Office

Naukri logo

Work closely with source data application teams and product owners to design, implement and support analytics solutions that provide insights to make better decisions.
Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools.
Perform multiple aspects involved in the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support.
Provide technical leadership and collaborate within a team environment as well as work independently.
Be a part of a DevOps team that completely owns and supports its product.
Implement batch and streaming data pipelines using cloud technologies (see the streaming sketch below).
Lead development of coding standards, best practices and privacy and security guidelines.
Mentor others on technical and domain skills to create multi-functional teams.

All you'll need for success
Minimum Qualifications: Education & Prior Job Experience
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or related technical discipline, or equivalent experience/training
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
3. 3 years of data engineering experience using SQL
4. 2 years of cloud development (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps and Power BI
5. Combination of development, administration and support experience in several of the following tools/platforms required:
a. Scripting: Python, PySpark, Unix, SQL
b. Data Platforms: Teradata, SQL Server
c. Azure Data Explorer (administration skills are a plus)
d. Azure Cloud Technologies:

Top 3 Mandatory Skills and Experience: SQL, Python, PySpark
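To complement the batch example shown with the earlier, similar posting, here is a minimal, hedged sketch of a streaming pipeline in PySpark Structured Streaming. It uses the built-in "rate" source so it runs without any external system; in Azure the source would typically be Event Hubs or Kafka instead.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("streaming_pipeline_sketch").getOrCreate()

# Synthetic event stream: the "rate" source emits (timestamp, value) rows.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Count events per 10-second window, tolerating 30 seconds of late data.
counts = (
    stream
    .withWatermark("timestamp", "30 seconds")
    .groupBy(F.window("timestamp", "10 seconds"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .option("truncate", False)
    .start()
)
query.awaitTermination(30)   # run briefly for illustration
query.stop()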

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies