15.0 - 21.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Data Architect with over 15 years of experience, your primary responsibility will be to lead the design and implementation of scalable, secure, and high-performing data architectures. You will collaborate with business, engineering, and product teams to develop robust data solutions that support business intelligence, analytics, and AI initiatives.

Your key responsibilities will include designing and implementing enterprise-grade data architectures using cloud platforms such as AWS, Azure, or GCP. You will lead the definition of data architecture standards, guidelines, and best practices while architecting scalable data solutions like data lakes, data warehouses, and real-time streaming platforms. Collaborating with data engineers, analysts, and data scientists, you will ensure optimal solutions are delivered based on data requirements.

In addition, you will oversee data modeling activities encompassing conceptual, logical, and physical data models. It will be your duty to ensure data security, privacy, and compliance with relevant regulations like GDPR and HIPAA. Defining and implementing data governance strategies alongside stakeholders and evaluating data-related tools and technologies are also integral parts of your role.

To excel in this position, you should possess at least 15 years of experience in data architecture, data engineering, or database development. Strong experience in architecting data solutions on major cloud platforms like AWS, Azure, or GCP is essential. Proficiency in data management principles, data modeling, ETL/ELT pipelines, and modern data platforms/tools such as Snowflake, Databricks, and Apache Spark is required. Familiarity with programming languages like Python, SQL, or Java, as well as real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub, will be beneficial.

Moreover, experience in implementing data governance, data cataloging, and data quality frameworks is important. Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills are necessary for this role. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is preferred, along with certifications like Cloud Architect or Data Architect (AWS/Azure/GCP).

Join us at Infogain, a human-centered digital platform and software engineering company, where you will have the opportunity to work on cutting-edge data and AI projects in a collaborative and inclusive work environment. Experience competitive compensation and benefits while contributing to experience-led transformation for our clients in various industries.
Posted 1 week ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. If you are a software engineering leader ready to take the reins and drive impact, we’ve got an opportunity just for you. As a Director of Software Engineering at JPMorgan Chase within the Commercial & Investment Bank, you will lead a technical domain, managing teams, technologies, and projects across multiple departments. Your deep expertise in software, applications, technical processes, and product management will be vital in managing numerous complex projects and initiatives, serving as the key decision maker for your teams while promoting innovation and effective solution delivery. With over 15 years of experience in developing Risk Management Systems for the Markets business, you have proven success in leading large teams and collaborating effectively with diverse internal stakeholders.

Job Responsibilities
Lead and manage large software development teams, fostering a culture of innovation, collaboration, and excellence.
Provide strategic direction and mentorship to team members, ensuring alignment with organizational goals.
Establish and maintain effective partnerships with internal stakeholders across various lines of business.
Oversee the planning, execution, and delivery of complex software projects, ensuring they meet business requirements, timelines, and budget constraints.
Implement best practices in project management and software development methodologies.
Leverage extensive technical expertise in Python and AWS cloud platforms to steer the creation of scalable and robust software solutions.
Possess a strong understanding of data architectures to ensure optimal design and implementation.
Work with Apache Spark, EMR, Glue, and Parquet to process and analyze large datasets, promoting insights and decision-making through advanced data analytics.
Optimize Redshift-based data pipelines for performance, managing over 100 TB of data and ensuring efficient data processing and retrieval.
Handle petabytes of data in data lake and data lakehouse environments such as Iceberg, implementing best practices for data management, storage, and retrieval.
Leverage experience with JPMorgan's in-house risk management system, Athena, across various lines of business within Markets.
Demonstrate a comprehensive understanding of the trade lifecycle, risk management, and financial products in investment banking, including options, swaps, equities, bonds, and repo.
Collaborate with cross-functional teams to ensure seamless integration and functionality of software solutions.
Promote innovation by staying abreast of emerging technologies and industry trends.
Develop and implement strategies that enhance the efficiency and effectiveness of software development processes.
Champion Agile methodologies to enhance project delivery, team collaboration, and continuous improvement.
Facilitate Agile ceremonies and ensure adherence to Agile principles and practices.

Required Qualifications, Capabilities And Skills
Formal training or certification on large-scale technology program concepts and 10+ years of applied experience, including 5+ years of experience leading technologists to manage, anticipate, and solve complex technical items within your domain of expertise.
Proven track record of leading large global teams (50+ developers).
Technical Skills: Expertise in Python, the AWS cloud platform (S3, Lambda, Kinesis, EC2, DynamoDB, EventBridge, MSK), big data technologies, and data lake/lakehouse architectures (Iceberg). Architect-level expertise in data solutions. Modern skills in cloud and AI.
Risk Management Systems: Extensive experience with risk management systems across various lines of business.
Financial Products Knowledge: Strong understanding of the trade lifecycle, risk management, trade models, trade booking, and financial products in investment banking.
Leadership Skills: Exceptional leadership, communication, and interpersonal skills. Ability to inspire and motivate teams to achieve high performance.

Preferred Qualifications, Capabilities And Skills
Experience with JPMorgan's Athena RMS is preferred.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Job

Key Result Areas And Activities:
Design and Development: Design, implement, and manage Redshift clusters for high availability, performance, and security.
Performance Optimization: Monitor and optimize database performance, including query tuning and resource management.
Backup and Recovery: Develop and maintain database backup and recovery strategies.
Security Enforcement: Implement and enforce database security policies and procedures.
Cost-Performance Balance: Ensure an optimal balance between cost and performance.
Collaboration with Development Teams: Work with development teams to design and optimize database schemas and queries. Perform database migrations, upgrades, and patching.
Issue Resolution: Troubleshoot and resolve database-related issues, providing support to development and operations teams. Automate routine database tasks using scripting languages and tools.

Must-Have
Strong understanding of database design, performance tuning, and optimization techniques
Proficiency in SQL and experience with database scripting languages (e.g., Python, Shell)
Experience with database backup and recovery, security, and high availability solutions
Familiarity with AWS services and tools, including S3, EC2, IAM, and CloudWatch
Operating System - Any flavor of Linux, Windows
Core Redshift Administration Skills - cluster management, performance optimization, workload management (WLM), vacuuming/analyzing tables for optimal performance, IAM policies, role-based access control, backup and recovery, automated backups, and restoration strategies
SQL query optimization, distribution keys, sort keys, and compression encoding
Knowledge of COPY and UNLOAD commands, S3 integration, and best practices for bulk data loading (see the short sketch below)
Scripting and automation for automating routine DBA tasks
Expertise in debugging slow queries and troubleshooting system tables (ref:hirist.tech)
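As a rough illustration of the bulk-loading and maintenance tasks listed above, the sketch below loads a Parquet extract from S3 into Redshift with COPY and then reclaims space and refreshes planner statistics. It is a minimal example only; the cluster endpoint, credentials, IAM role, table, and bucket names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch (hypothetical names): bulk-load a Parquet extract from S3 into
# Redshift with COPY, then VACUUM and ANALYZE the table to keep queries fast.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="analytics",
    user="dba_user",
    password="********",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block

copy_sql = """
    COPY sales.orders
    FROM 's3://example-bucket/exports/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn.cursor() as cur:
    cur.execute(copy_sql)                 # bulk load from S3
    cur.execute("VACUUM sales.orders;")   # reclaim space and re-sort rows
    cur.execute("ANALYZE sales.orders;")  # refresh planner statistics

conn.close()
```

In practice a script like this would typically be scheduled and parameterized rather than run with hard-coded credentials, which is part of the "scripting and automation" expectation above.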
Posted 1 week ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary We are looking for a skilled and motivated Software Engineer with strong experience in data engineering and ETL processes. The ideal candidate should be comfortable working with any object-oriented programming language, possess strong SQL skills, and have hands-on experience with AWS services like S3 and Redshift. Experience in Ruby and working knowledge of Linux are a plus.

Key Responsibilities
Design, build, and maintain robust ETL pipelines to handle large volumes of data.
Work closely with cross-functional teams to gather data requirements and deliver scalable solutions.
Write clean, maintainable, and efficient code using object-oriented programming and SOLID principles.
Optimize SQL queries and data models for performance and reliability.
Use AWS services (S3, Redshift, etc.) to develop and deploy data solutions.
Troubleshoot issues in data pipelines and perform root cause analysis.
Collaborate with DevOps/infra teams for deployment, monitoring, and scaling of data pipelines.

Skills:
6+ years of experience in Data Engineering.
Programming: Proficiency in any object-oriented language (e.g., Java, Python, etc.). Bonus: Experience in Ruby is a big plus.
SQL: Moderate to advanced skills in writing complex queries and handling data transformations.
AWS: Must have hands-on experience with services like S3 and Redshift.
Linux: Familiarity with Linux-based systems is good to have.

Qualifications:
Experience working in a data/ETL-focused role.
Familiarity with version control systems like Git.
Understanding of data warehouse concepts and performance tuning. (ref:hirist.tech)
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for designing, developing, and implementing data-centric software solutions using various technologies. This includes conducting code reviews, recommending best coding practices, and providing effort estimates for the proposed solutions. Additionally, you will design audit business-centric software solutions and maintain comprehensive documentation for all proposed solutions.

As a key member of the team, you will lead architecture and design efforts for product development and application development for relevant use cases. You will provide guidance and support to team members and clients, implementing best practices of data engineering and architectural solution design, development, testing, and documentation. Your role will require you to participate in team meetings, brainstorming sessions, and project planning activities. It is essential to stay up-to-date with the latest advancements in the data engineering area to drive innovation and maintain a competitive edge. You will stay hands-on with the design, development, and validation of systems and models deployed. Collaboration with audit professionals to understand business, regulatory, and risk requirements, as well as key alignment considerations for audit, is a crucial aspect of the role. Driving efforts in the data engineering and architecture practice area will be a key responsibility.

In terms of mandatory technical and functional skills, you should have a deep understanding of RDBMS (MS SQL Server, ORACLE, etc.), strong programming skills in T-SQL, and proven experience in ETL and reporting (MSBI stack/COGNOS/INFORMATICA, etc.). Additionally, experience with cloud-centric databases (AZURE SQL/AWS RDS), ADF (Azure Data Factory), data warehousing skills using SYNAPSE/Redshift, understanding and implementation experience of data lakes, and experience in large data processing/ingestion using Databricks APIs, Lakehouse, etc., are required. Knowledge of MPP databases like Snowflake/Postgres-XL is also essential.

Preferred technical and functional skills include understanding financial accounting, experience with NoSQL using MONGODB/COSMOS, Python coding experience, and an aptitude towards emerging data platform technologies like MS Azure Fabric.

Key behavioral attributes required for this role include strong analytical, problem-solving, and critical-thinking skills, excellent collaboration skills, the ability to work effectively in a team-oriented environment, excellent written and verbal communication skills, and the willingness to learn new technologies and work on them.
Posted 1 week ago
1.0 - 31.0 years
1 - 2 Lacs
Jaipur
On-site
🧠 About the Role
We are seeking a proactive and detail-oriented Apache Superset & SQL Expert with 1+ years of experience in the healthcare domain. You’ll be responsible for building insightful BI dashboards and maintaining complex data pipelines to support mission-critical analytics for healthcare operations and compliance reporting.

✅ Key Responsibilities
Develop and maintain advanced Apache Superset dashboards tailored for healthcare KPIs and operational metrics
Write, optimise, and maintain complex SQL queries to extract and transform data from multiple healthcare systems
Collaborate with data engineering and clinical teams to define and model datasets for visualisation
Ensure dashboards comply with healthcare data governance, privacy (e.g., HIPAA), and audit requirements
Monitor performance, implement row-level security, and maintain a robust Superset configuration
Translate clinical and operational requirements into meaningful visual stories

🧰 Required Skills & Experience
1+ years of domain experience in healthcare analytics or working with healthcare datasets (EHR, claims, patient outcomes, etc.)
3+ years of experience working with Apache Superset in a production environment
Strong command over SQL, including query tuning, joins, aggregations, and complex transformations
Hands-on experience with data modelling and relational database design
Solid understanding of clinical terminology, healthcare KPIs, and reporting workflows
Experience in working with PostgreSQL, MySQL, or other SQL-based databases
Strong documentation, communication, and stakeholder-facing skills

🌟 Nice-to-Have
Familiarity with HIPAA, HL7/FHIR data structures, or other regulatory standards
Experience with Python, Flask, or Superset plugin development
Exposure to modern healthcare data platforms, dbt, or Airflow
Experience integrating Superset with EMR, clinical data lakes, or warehouse systems like Redshift or BigQuery
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
maharashtra
On-site
As a Lead Data Engineer, you will be responsible for leveraging your 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud (AWS), and Data Governance domains. Your expertise in a modern programming language such as Scala, Python, or Java, with a preference for Spark/PySpark, will be crucial in this role. Your role will require you to have experience with configuration management and version control tools like Git, along with familiarity working within a CI/CD framework. If you have experience in building frameworks, it will be considered a significant advantage.

A minimum of 8 years of recent hands-on SQL programming experience in a Big Data environment is necessary, with a preference for experience in Hadoop/Hive. Proficiency in PostgreSQL, RDBMS, NoSQL, and columnar databases will be beneficial for this role. Your hands-on experience in AWS Cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, and EMR, will play a vital role in developing and maintaining ETL applications and data pipelines using big data technologies. Experience with Apache Kafka, Spark, and Airflow is a must-have for this position.

If you are excited about this opportunity and possess the required skills and experience, please share your CV with us at omkar@hrworksindia.com. We look forward to potentially welcoming you to our team. Regards, Omkar
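To give a flavor of the Spark/PySpark and AWS data-engineering work this role calls for, here is a minimal, hedged sketch of a batch pipeline that reads raw Parquet from S3, applies a simple transformation, and writes a partitioned dataset back to S3. The bucket names and column names are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark batch-pipeline sketch (illustrative bucket/column names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

# Read raw events landed in S3 (assumed Parquet layout)
raw = spark.read.parquet("s3a://example-raw-bucket/orders/")

# Basic cleansing and enrichment
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write a partitioned, analytics-ready dataset
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-curated-bucket/orders/"))

spark.stop()
```

A job like this would typically be scheduled through Airflow and run on EMR or Glue, in line with the stack described above.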
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We are seeking an experienced Senior QA Specialist to join a dynamic team for a critical AWS to GCP migration project. Your primary responsibility will involve the rigorous testing of data pipelines and data integrity in GCP cloud to ensure seamless reporting and analytics capabilities.

Your key responsibilities will include designing and executing test plans to validate data pipelines re-engineered from AWS to GCP, ensuring data integrity and accuracy. You will work closely with data engineering teams to understand AVRO, ORC, and Parquet file structures in AWS S3, and analyze the data in external tables created in Athena that are used for reporting. It will be essential to ensure that the schema and data in BigQuery match those in Athena to support reporting in PowerBI. Additionally, you will be required to test and validate Spark pipelines and other big data workflows in GCP. Documenting all test results and collaborating with development teams to resolve discrepancies will also be part of your responsibilities. Furthermore, providing support to UAT business users during UAT testing is expected.

To excel in this role, you should possess proven experience in QA testing within a big data DWBI ecosystem. Strong familiarity with cloud platforms such as AWS, GCP, or Azure, with hands-on experience in at least one, is necessary. Deep knowledge of data warehousing solutions like BigQuery, Redshift, Synapse, or Snowflake is essential. Expertise in testing data pipelines and understanding different file formats like Avro and Parquet is required. Experience with reporting tools such as PowerBI or similar is preferred. Your excellent problem-solving skills and ability to work independently will be valuable, along with strong communication skills and the ability to collaborate effectively across teams.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We’re looking for a talented technology leader to join our passionate engineering team as a Senior Software Engineer and help us scale and grow our cloud-based systems and technologies with a keen eye towards software quality and operational excellence. As a tech “unicorn” headquartered in San Diego, this is an amazing opportunity for the right person to join and guide the technical vision of this pre-IPO software company as we make history in the sales enablement space!

As the Senior Software Engineer, you will play a vital part in driving solid cloud architecture and ensuring best engineering practices across multiple engineering teams. You, along with your globally dispersed teammates, will collaborate to build microservice-based systems that support multiple products enabling sharing and collaboration between our customers' sales and marketing departments. You will work closely with our product leads, engineering leads, and team, and contribute to best practices for CI/CD. You will also mentor junior engineers and help grow a strong engineering team. This is an opportunity to work as a technical thought-leader and share ideas to build a best-in-class microservice that wows our internal and external stakeholders with its functionality and simplicity.

At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page.

Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here.

Develop, improve, and maintain our microservice and ensure seamless integration with the rest of the Seismic platform.
Help grow a new local engineering team while collaborating on and driving technical and architectural decisions across multiple remote teams.
Collaborate with globally dispersed product managers, designers, and software engineers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience with Seismic.
Explore new technologies and industry trends and bring your findings to life in our products.
Participate in and contribute towards code reviews, bug/issue triage, and documentation.
Contribute to troubleshooting and continuous quality improvements.

Bachelor's degree in Computer Science, a similar technical field of study, or equivalent practical experience.
5+ years of software engineering experience and a passion for building and innovating – you stay up to date with the latest technologies and trends in development.
Must have strong familiarity with .NET Core and C#, or similar object-oriented languages and frameworks.
Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.).
Familiarity with RESTful microservice-based APIs.
Experience with the SCRUM and Agile development process.
Familiarity and comfort developing in cloud-based environments (Azure, AWS, Google Cloud, etc.).
Optional: Experience with HTML/CSS/JS and modern SPA frameworks (React, Vue.js, etc.).
Optional: Experience with 3rd party integrations.
Optional: Familiarity with meeting systems like Zoom, WebEx, MS Teams.
Optional: Familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot.
Seen as an active contributor in the team problem-solving process – you aren't afraid to share your opinions in a low-ego manner or roll up your sleeves and write critical-path code or refactor a significant piece of code.
Deep experience across multiple software projects, driving the end-to-end software development lifecycle of an architecturally complex system or product.
Ability to think tactically as well as strategically, respecting what came before you and always thinking longer-term.
Highly focused on operational excellence and software quality, with experience in CI/CD and best operational practices.
Your technical skills are sought after as you develop in a pragmatic and efficient manner.
You enjoy solving challenging problems, all while having a blast with equally passionate and talented team members.
Conversant in AI engineering – you’ve been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc.

If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here.

Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft.

Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law.

Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description What Will You Do? Build and own ETL data pipelines that will power all our reporting & analytics needs Develop clean, safe, testable and cost-efficient solutions Build fast and reliable pipelines with underlying data model that can scale according to business needs and growth Understand the system you are building, foresee possible shortcomings and be able to resolve or compromise appropriately Mentor Junior engineers in data Quality/pipelines etc. Company Overview Fanatics is building the leading global digital sports platform to ignite and harness the passions of fans and maximize the presence and reach for hundreds of partners globally. Leveraging these long-standing partnerships, a database of more than 80 million global consumers and a trusted, beyond recognizable brand name, Fanatics is expanding its position as the global leader for licensed sports merchandise to now becoming a next-gen digital sports platform, featuring an array of offerings across the sports ecosystem. The Fanatics family of companies currently includes Fanatics Commerce, a vertically-integrated licensed merchandise business that has changed the way fans purchase their favorite team apparel, jerseys, headwear and hardgoods through a tech-infused approach to making and quickly distributing fan gear in today’s 24/7 mobile-first economy; Fanatics Collectibles, a transformative company that is building a new model for the hobby and giving collectors an end-to-end collectibles experience; and Fanatics Betting & Gaming, a mobile betting, gaming and retail Sportsbook platform. all major Fanatics’ partners include professional sports leagues (NFL, MLB, NBA, NHL, NASCAR, MLS, PGA) and hundreds of collegiate and professional teams, which include several of the biggest global soccer clubs. As a market leader with more than 8,000 employees, and hundreds of partners, suppliers, and vendors worldwide, we take responsibility for driving toward more ethical and sustainable practices. We are committed to building an inclusive Fanatics community, reflecting and representing society at every level of the business, including our employees, vendors, partners and fans. Fanatics is also dedicated to making a positive impact in the communities where we all live, work, and play through strategic philanthropic initiatives. At Fanatics, we’re a diverse, passionate group of employees aiming to ignite pride and passion in the fans we outfit, celebrate and support. We recognize that diversity helps drive and foster innovation, and through our IDEA program (inclusion, diversity, equality and advocacy) at Fanatics we provide employees with tools and resources to feel connected and engaged in who they are and what they do to support the ultimate fan experience. Job Requirements Must have 5+ years of experience in Data Engineering field, with a proven track record of exposure in Big Data technologies such as Hadoop, Amazon EMR, Hive, Spark. Expertise in SQL technologies and at least one major Data Warehouse technology (Snowflake, RedShift, BigQuery etc.). Must have experience in building data platform – designing and building data model, integrate data from many sources, build ETL and data-flow pipelines, and support all parts of the data platform. Programming proficiency in Python and Scala, with experience writing modular, reusable, and testable code, including robust error handling and logging in data engineering applications. 
Hands-on experience with AWS cloud services, particularly in areas such as S3, Lambda, Glue, EC2, RDS, and IAM.
Experience with orchestration tools such as Apache Airflow for scheduling, monitoring, and managing data pipelines in a production environment (a minimal DAG sketch follows below).
Familiarity with CI/CD practices, automated deployment pipelines, and version control systems (e.g., Git, GitHub/GitLab), ensuring reliable and repeatable data engineering workflows.
Data analysis skills – able to make arguments with data and proper visualization.
Energetic, enthusiastic, detail-oriented, and passionate about producing high-quality analytics deliverables.
Must have experience in developing applications with high performance and low latency.
Ability to take ownership of initiatives and drive them independently from conception to delivery, including post-deployment monitoring and support.
Strong communication and interpersonal skills with the ability to build relationships with stakeholders, understand business requirements, and translate them into technical solutions.
Comfortable working cross-functionally in a multi-team environment, collaborating with data analysts, product managers, and engineering teams to deliver end-to-end data solutions.

Job Description
We are seeking a Sr. Data Engineer who has strong design and development skills and upholds scalability, availability, and excellence when building the next generation of our data pipelines and platform. You are an expert in various data processing technologies and data stores, appreciate the value of clear communication and collaboration, and are devoted to continual capacity planning and performance fine-tuning for emerging business growth. As the Senior Data Engineer, you will be mentoring junior engineers on the team.

Good To Have
Experience in Web Services, API integration, and data exchanges with third parties is preferred.
Experience in Snowflake is a big plus.
Experience in NoSQL technologies (MongoDB, FoundationDB, Redis) is a plus.
We would appreciate candidates who can demonstrate business-side functional understanding and effectively communicate the business context alongside their technical expertise.
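Since the role calls for building and orchestrating ETL pipelines with Apache Airflow (as referenced above), here is a minimal, hypothetical DAG sketch. The DAG id, task names, schedule, and extract/load functions are illustrative assumptions only, not part of the posting.

```python
# Minimal Airflow 2.x DAG sketch (hypothetical task names and schedule).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull raw data from a source system into staging storage.
    print("extracting orders...")


def load_to_warehouse(**context):
    # Placeholder: load transformed data into the warehouse (e.g. Redshift/Snowflake).
    print("loading orders into the warehouse...")


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    # Run the load only after the extract succeeds.
    extract >> load
```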
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: AWS Data Engineer

Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS EMR, Glue, and Python/PySpark, along with data analytics expertise in Amazon Athena and Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Responsibilities
Design and implement ETL workflows using AWS EMR, Glue, Python, and PySpark
Develop and optimize queries using Amazon Athena and Redshift (see the short Athena sketch below)
Build scalable data pipelines to ingest, transform, and load data from various sources
Ensure data quality, integrity, and security across AWS services
Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions
Monitor and troubleshoot ETL jobs and cloud infrastructure performance
Automate data workflows and integrate with CI/CD pipelines

Required Skills And Qualifications
Hands-on experience with AWS EMR, Glue, Athena, and Redshift
Strong programming skills in Python (with Pandas, NumPy, PySpark) and SQL
Experience with ETL design, implementation, and optimization
Familiarity with S3, Lambda, CloudWatch, and other AWS services
Experience with schema design, partitioning, and query optimization in Athena
Proficiency in version control (Git) and agile development practices

Work Location: Chennai, Bangalore, Hyderabad
Tier Level: 3-4
Experience: 4-6 years
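As a small, hedged illustration of the Athena query development mentioned above, the snippet below submits a query through boto3 and points the results at an S3 staging location. The database, table, region, and bucket names are assumptions made for the example.

```python
# Minimal Athena query sketch via boto3 (hypothetical database/table/bucket names).
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT order_date, COUNT(*) AS orders
        FROM sales.orders
        WHERE order_date >= DATE '2024-01-01'
        GROUP BY order_date
        ORDER BY order_date
    """,
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = response["QueryExecutionId"]

# Poll until the query reaches a terminal state, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} result rows")  # first row is the header
```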
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
The Maintenance Automation Platform (MAP) team within the Global Reliability and Maintenance Engineering (RME) Central Team is looking for an exceptional business intelligence engineer to join us. In this role, you will work on an analytical team to bust myths, create insights, and produce recommendations to help Central RME deliver world-class service to the Amazon network. As part of the team, you will be involved in all phases of research, experiment, design and analysis, including defining research questions, designing experiments, identifying data requirements, and communicating insights and recommendations. You'll also be expected to continuously learn new systems, tools, and industry best practices to analyze big data and enhance our analytics. These are exciting, fast-paced businesses in which we get to work on extremely interesting analytical problems, in an environment where you get to learn from other engineers and apply business intelligence to help leadership make informed decisions.

Your work focuses on complex and/or ambiguous problem areas in existing or new BI initiatives. You take the long-term view of your team's solutions and how they fit into the team’s architecture. You consider where each solution is in its lifecycle and, where appropriate, proactively fix architecture deficiencies. You understand capabilities and limitations of the systems you work with (e.g. cluster size, concurrent users, data classification). You are able to explain these limitations to technical and non-technical audiences, helping them understand what’s currently possible and which efforts need a technology investment. You take ownership of team infrastructure, providing a system-wide view and design guidance. You make things simpler. You drive BI engineering best practices (e.g. Operational Excellence, code reviews, syntax and naming convention, metric definitions, alarms) and set standards. You collaborate with customers and other internal partners to refine the problem into specific deliverables, and you understand the business context well enough to recommend alternatives and anticipate future requests. In addition to stakeholders, you may work with partner teams (business and technical) and Data Engineers/Data Scientists/BAs/SDEs/other BIEs to design and deliver the right solution. You contribute to the professional development of colleagues, improving their business and technical knowledge and their understanding of BI engineering best practices.

Key job responsibilities
Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards on the key drivers of our business
Partner with operations and business teams to consult, develop and implement KPIs, automated reporting solutions, and infrastructure improvements to meet business needs
Develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support business needs
Perform both ad-hoc and strategic analyses
Strong verbal/written communication and presentation skills, including an ability to effectively communicate with both business and technical teams

Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, Quicksight, or similar tools
Experience with data modeling, warehousing and building ETL pipelines
Experience in statistical analysis packages such as R, SAS and Matlab
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling (a minimal example follows below)

Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3031639
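For the SQL-plus-Python scripting skill referenced above, here is a small hedged sketch that pulls an aggregate from a Redshift (PostgreSQL-protocol) warehouse into pandas for further analysis. The connection string, schema, and table are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: pull warehouse data into pandas for analysis (hypothetical
# connection details and table names; Redshift speaks the PostgreSQL protocol).
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://bi_user:********@example-cluster.abc123"
    ".us-east-1.redshift.amazonaws.com:5439/analytics"
)

query = """
    SELECT site_id,
           DATE_TRUNC('week', work_order_ts) AS week,
           COUNT(*) AS work_orders
    FROM rme.work_orders
    GROUP BY site_id, DATE_TRUNC('week', work_order_ts)
"""

df = pd.read_sql(query, engine)

# Simple downstream processing before handing off to a dashboard or model.
weekly_totals = df.pivot_table(index="week", columns="site_id",
                               values="work_orders", aggfunc="sum")
print(weekly_totals.tail())
```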
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day like a clock work? In order to make that happen, behind those millions of packages, billions of decision gets made by machines and humans. What is the accuracy of customer provided address? Do we know exact location of the address on Map? Is there a safe place? Can we make unattended delivery? Would signature be required? If the address is commercial property? Do we know open business hours of the address? What if customer is not home? Is there an alternate delivery address? Does customer have any special preference? What are other addresses that also have packages to be delivered on the same day? Are we optimizing delivery associate’s route? Does delivery associate know locality well enough? Is there an access code to get inside building? And the list simply goes on. At the core of all of it lies quality of underlying data that can help make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative. Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer of Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large scale high visibility/ high impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience. The Successful Candidate Will Be Able To Effectively manage customer expectations and resolve conflicts that balance client and company needs. Develop process to effectively maintain and disseminate project information to stakeholders. 
Be successful in a delivery-focused environment, determining the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
Serve as a role model for Amazon Leadership Principles inside and outside the organization.
Actively seek to implement and distribute best practices across the operation.

Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, Quicksight, or similar tools
Experience with a scripting language (e.g., Python, Java, or R)
Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
Experience applying basic statistical methods (e.g. regression) to difficult business problems
Experience gathering business requirements, using industry standard business intelligence tool(s) to extract data, formulate metrics and build reports
Track record of generating key business insights and collaborating with stakeholders

Preferred Qualifications
Knowledge of how to improve code quality and optimize BI processes (e.g. speed, cost, reliability)
Knowledge of data modeling and data pipeline design
Experience in designing and implementing custom reporting systems using automation tools

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A3031609
Posted 1 week ago
12.0 - 17.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Business Analyst (Global Commercial Operations) What You Will Do Let’s do this. Let’s change the world. In this vital role you will play a key role in successfully leading the engagement model between Amgen's Technology organization and Global Commercial Operations. Collaborate with global commercial operations Business SMEs, Data Engineers, Data Scientists and Product Managers to lead business analysis activities, ensuring alignment with engineering and product goals on the Data & AI Product Teams Become a global commercial operations domain authority in Data & AI technology capabilities by researching, deploying, and sustaining features built according to Amgen’s Quality System Lead the voice of the customer assessment to define business processes and product needs Work with Product Managers and customers to define scope and value for new developments Collaborate with Engineering and Product Management to prioritize release scopes and refine the Product backlog Ensure non-functional requirements are included and prioritized in the Product and Release Backlogs Facilitate the breakdown of Epics into Features and Sprint-Sized User Stories and participate in backlog reviews with the development team Clearly express features in User Stories/requirements so all team members and partners understand how they fit into the product backlog Ensure Acceptance Criteria and Definition of Done are well-defined Work closely with Business SME’s, Data Scientists, ML Engineers to understand the requirements around Data product requirements, KPI’s etc. Analyzing the source systems and create the STTM documents. Develop and implement effective product demonstrations for internal and external partners Maintain accurate documentation of configurations, processes, and changes Understand end-to-end data pipeline design and dataflow Apply knowledge of data structures to diagnose data issues for resolution by data engineering team What We Expect Of You We are all different, yet we all use our unique contributions to serve patients. 
Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of experience in global commercial operations Information Systems Preferred Qualifications: Must-Have Skills Excellent problem-solving skills and a passion for solving complex challenges in for AI-driven technologies Experience with Agile software development methodologies (Scrum) Superb communication skills and the ability to work with senior leadership with confidence and clarity Has experience with writing user requirements and acceptance criteria in agile project management systems such as JIRA Experience in managing product features for PI planning and developing product roadmaps and user journeys Good-to-Have Skills: Demonstrated expertise in data and analytics and related technology concepts Understanding of data and analytics software systems strategy, governance, and infrastructure Familiarity with low-code, no-code test automation software Technical thought leadership Able to communicate technical or complex subject matters in business terms Jira Align experience Experience of DevOps, Continuous Integration, and Continuous Delivery methodology Soft Skills: Able to work under minimal supervision Excellent analytical and gap/fit assessment skills Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation Ability to manage multiple priorities successfully Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills Technical Skills: Experience with cloud-based data technologies (e.g., Databricks, Redshift, S3 buckets) AWS (similar cloud-based platforms) Experience with design patterns, data structures, test-driven development Knowledge of NLP techniques for text analysis and sentiment analysis What You Can Expect Of Us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
5.0 - 10.0 years
7 - 17 Lacs
Kolkata, Hyderabad, Pune
Work from Office
Airflow Data Engineer on the AWS platform
Job Title: Apache Airflow Data Engineer (role as per TCS Role Master)
• 4-8 years of experience in AWS, Apache Airflow (on the Astronomer platform), Python, PySpark, SQL
• Good hands-on knowledge of SQL and the Data Warehousing life cycle is an absolute requirement.
• Experience in creating data pipelines and orchestrating them using Apache Airflow
• Significant experience with data migrations and development of Operational Data Stores, Enterprise Data Warehouses, Data Lakes, and Data Marts.
• Good to have: Experience with cloud ETL and ELT in one of the tools like DBT, Glue, EMR, Matillion, or any other ELT tool
• Excellent communication skills to liaise with Business & IT stakeholders.
• Expertise in planning the execution of a project and effort estimation.
• Exposure to Agile ways of working.
Candidates for this position will be offered employment with TAIC or TCSL as the entity.
Keywords: Data warehousing, PySpark, GitHub, AWS data platform, Glue, EMR, Redshift, Databricks, Data Marts, DBT/Glue/EMR or Matillion, data engineering, data modelling, data consumption
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AWS Staff-Senior The opportunity We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture to support analytics, BI, and data science use cases. As a problem-solver with the keen ability to diagnose a client’s unique needs, one should be able to see the gap between where clients currently are and where they need to be. The candidate should be capable of creating a blueprint to help clients achieve their end goal. Key Responsibilities: Design and implement data ingestion pipelines from various sources including on-premise Oracle databases, batch files, and Confluent Kafka. Develop Python producers and AWS Glue jobs for batch data processing. Build and manage Spark streaming applications on Amazon EMR. Architect and maintain Medallion Architecture-based data lakes on Amazon S3. Develop and maintain data sinks in Redshift and Oracle. Automate and orchestrate workflows using Apache Airflow. Monitor, debug, and optimize data pipelines for performance and reliability. Collaborate with cross-functional teams including data analysts, scientists, and DevOps. Required Skills and Experience: Good programming skills in Python and Spark (Pyspark). Hands on Experience with Amazon S3, Glue, EMR. Good SQL knowledge on Amazon Redshift and Oracle Proven experience in handling streaming data with Kafka and building real-time pipelines. Good understanding of data modeling, ETL frameworks, and performance tuning. Experience with workflow orchestration tools like Airflow. Nice-to-Have Skills: Infrastructure as Code using Terraform. Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Familiarity with DataSync for file movement and medallion architecture for data lakes. Monitoring and alerting using CloudWatch, Datadog, or Splunk. Qualifications : BTech / MTech / MCA / MBA EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
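As a rough, hedged sketch of the Kafka-to-S3 streaming ingestion this role describes (Spark streaming on EMR feeding an S3-based data lake), the snippet below reads a Kafka topic with Spark Structured Streaming and lands the raw events in S3 as Parquet. Broker addresses, the topic name, and the bucket paths are illustrative assumptions, not EY project details.

```python
# Minimal Spark Structured Streaming sketch: Kafka topic -> S3 "bronze" layer
# (illustrative broker/topic/bucket names; assumes a Spark 3.x cluster with the
# spark-sql-kafka package available, e.g. on EMR).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("orders-kafka-ingest").getOrCreate()

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
         .option("subscribe", "orders")
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast to strings for the raw (bronze) layer.
raw = events.select(
    col("key").cast("string").alias("key"),
    col("value").cast("string").alias("payload"),
    col("timestamp"),
)

query = (
    raw.writeStream
       .format("parquet")
       .option("path", "s3a://example-datalake/bronze/orders/")
       .option("checkpointLocation", "s3a://example-datalake/checkpoints/orders/")
       .trigger(processingTime="1 minute")
       .start()
)

query.awaitTermination()
```

Downstream layers (cleansed and curated tables in the Medallion style, loads into Redshift or Oracle, Airflow orchestration) would build on a raw stream like this.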
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
5 to 8 years’ experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Hands-on Mantas expert throughout the full development life cycle, including: requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and on-going technical support
Translate business needs (BRD) into effective technical solutions and documents (FRD/TSD)
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, Scenario Development; thorough knowledge and hands-on experience in Mantas FSDM, DIS, Batch Scenario Manager
Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.
Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta
Others: Basics of job schedulers like Autosys. Basics of entitlement management
Certification on any of the above topics would be an advantage.

------------------------------------------------------

Job Family Group: Technology

------------------------------------------------------

Job Family: Digital Software Engineering

------------------------------------------------------

Time Type: Full time

------------------------------------------------------

Most Relevant Skills
Please see the requirements listed above.

------------------------------------------------------

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi.

View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world’s 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients – no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines – from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law. What You'll Do BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use-cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally. This Will Include Serving as a leader within BCG X and specifically the KEY Impact Management by BCG X Tribe (Transformation, Post-Merger-Integration related software and data products) overseeing the delivery of high-quality software: driving technical roadmap, architectural decisions and mentoring engineers Influencing and serving as a key decision maker in BCG X technology selection & strategy Active “hands-on” role, building intelligent analytical products to solve problems, write elegant code, and iterate quickly Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe. Responsible for technology roadmap of existing and new components delivered. Architecting and implementing backend and frontend solutions primarily using .NET, C#, MS SQL Server, Angular, and other technologies best suited for the goals, including open source (e.g., Node, Django, Flask, Python) where needed. What You'll Bring 8+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally agile environment) with exposure to a variety of technologies and solutions, with at least 5 years’ experience in an Architect role. Experience with a wide range of Application and Data architectures, platforms and tools including: Service Oriented Architecture, Clean Architecture, Software as a Service, Web Services, Object-Oriented Languages (like C# or Java), SQL Databases (like Oracle or SQL Server), Relational, Non-relational Databases, Hands-on experience with analytics tools and reporting tools, Data Science experience etc.
Thoroughly up to date in technology: Modern cloud architectures including AWS, Azure, GCP, Kubernetes Very strong particularly in .NET, C#, MS SQL Server, Angular technologies Open source stacks including NodeJs, React, Angular, Flask are good to have CI/CD / DevSecOps / GitOps toolchains and development approaches Knowledge of machine learning & AI frameworks Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow At least a Bachelor's degree; Master’s degree and/or MBA preferred Team player with excellent work habits and interpersonal skills Care deeply about product quality, reliability, and scalability Passion for the people and culture side of engineering teams Outstanding written and oral communications skills The ability to travel, depending on project requirements. #BCGXjob Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Introduction: A Career at HARMAN Corporate We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN Corporate, you are integral to our company’s award-winning success. Enrich your managerial and organizational talents – from finance, quality, and supply chain to human resources, IT, sales, and strategy Augment your comprehensive skillset with expert training across decision-making, change management, leadership, and business development Obtain 360-degree support throughout your career life cycle, from early-stage to seasoned leader About The Role The People Intelligence Data Engineer will play a crucial role in driving data-driven decision-making within the HR function. This role involves leveraging tools to make data available for modeling, uncover insights, predict trends, and optimize HR strategies. The ideal candidate will have a strong analytical mindset, a passion for data, and a deep understanding of HR processes. What You Will Do Build, maintain, and troubleshoot HR databases and data systems. Manage, pre-process, and analyze large datasets to support HR initiatives. Design and implement ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes to move HR data efficiently. Utilize SQL, Python, or other programming languages to analyze HR data, identify trends, and generate insights. Integrate various systems and databases to streamline data flow and accessibility. Work with the team and take the lead in updating existing datasets and implementing organizational changes to reflect in our datasets and, eventually, in our People Insight inventory of metrics and visuals. Collaborate with HR and other departments to identify opportunities for AI-driven improvements. Work with cross-functional teams to implement data-driven solutions that enhance employee experience and operational efficiency. Provide data-driven insights and recommendations to support the strategic planning process. Contribute to the development and execution of HR strategies through robust data analysis and reporting. Investigate and stay current with emerging AI and intelligent tools, assessing their potential applications in the HR domain. Manage multiple tasks and projects simultaneously, demonstrating an eagerness to contribute and make a positive impact. Support colleagues in improving daily work processes and strategic planning. Work effectively within a team, fostering a collaborative and supportive environment. 
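The responsibilities above call for using Python or SQL to analyze HR data and surface trends. As a rough illustration only — not HARMAN's systems or schema — a small pandas sketch of that kind of analysis might look like the following; the file name, columns, and year are assumed for the example.

import pandas as pd

# Hypothetical illustration: file and column names are assumptions, not HARMAN's schema.
employees = pd.read_csv("employees.csv", parse_dates=["hire_date", "termination_date"])

# Headcount by department (active employees have no termination date).
active = employees[employees["termination_date"].isna()]
headcount = active.groupby("department")["employee_id"].count()

# Simple attrition rate per department for one calendar year.
terminated = employees[employees["termination_date"].dt.year == 2024]
attrition_rate = (
    terminated.groupby("department")["employee_id"].count() / headcount
).fillna(0).round(3)

print(pd.DataFrame({"headcount": headcount, "attrition_rate_2024": attrition_rate}))

In practice the same aggregates would typically be modeled once in the warehouse (e.g., Snowflake or Redshift, as the posting mentions) rather than recomputed ad hoc, so that metrics stay consistent across reports.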
What You Need To Be Successful 5+ years of experience with Workday or equivalent systems Proficient in writing queries for relational databases Previous experience in HR analytics or a related field is preferred Proficiency in Python (e.g., Pandas, NumPy), R, or other relevant languages Knowledge of data warehousing platforms such as Snowflake, Amazon Redshift, or BigQuery Experience with HR analytics platforms such as Workday, SAP SuccessFactors, or Oracle HR Cloud General IT knowledge, including database management and system integration Business Acumen – Applies understanding of the business and how one’s area integrates with others to achieve departmental objectives Problem Solving – Identifies and solves technical and operational problems; understands and recognizes the broader impact across the department Impact – Guided by policies and departmental plans; impacts the team’s ability to achieve service, quality, and timeliness objectives Interpersonal Skills – Guides and influences others, internally or externally, to adopt a different point of view Bonus Points if You Have Bachelor’s degree, or equivalent work experience, with a preferred concentration in Computer Science, Information Management, Data Science, or a related field People analytics experience focusing on internal customers Excellent communication and presentation skills to effectively convey complex data insights to non-technical audiences Understanding of the Workday system and associated Workday products Understanding of HR operations and experience in employee data management What Makes You Eligible Be willing to work in an office What We Offer Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.) Extensive training opportunities through our own HARMAN University Competitive wellness benefits Tuition reimbursement “Be Brilliant” employee recognition and rewards program An inclusive and diverse work environment that fosters and encourages professional and personal development You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. 
Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Company Description At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Job Description This role will be part of a team that develops software that processes data captured every day from over a quarter of a million Computer and Mobile devices worldwide. It measures panelists’ activities as they surf the Internet via browsers or use Mobile Apps downloaded from Apple’s and Google’s stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device, and also detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions: starting from project onboarding until post-launch support, and including design, development, and testing. The engineer is expected to coordinate, support, and work with multiple delocalized project teams in multiple regions. As a member of the technical staff with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day, across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system that runs in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members. Responsibilities System Deployment: Conceive, design and build new features in the existing backend processing pipelines. CI/CD Implementation Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes. Code Quality And Best Practices Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality. Performance Optimization Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores. Mentorship And Collaboration Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development. Security And Compliance Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security. Qualifications Key Skills Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Proven experience, minimum 3 years, in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL and databases such as Postgres Minimum 2 years of development on an AWS platform Strong understanding of CI/CD principles and tools. GitLab is a plus Excellent problem-solving and debugging skills.
Strong communication and collaboration skills with the ability to communicate complex technical concepts and align the organization on decisions Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply Utilizes team collaboration to create innovative solutions efficiently Other Desirable Skills Knowledge of networking principles and security best practices. AWS certifications Experience with Data Warehouses, ETL, and/or Data Lakes very desirable Experience with RedShift, Airflow, Python, Lambda, Prometheus, Grafana, & OpsGenie a bonus Exposure to the Google Cloud Platform (GCP) Additional Information Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
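The posting above asks for high-volume ETL built with AWS Glue or PySpark feeding databases such as Postgres. As a hedged sketch only — not Nielsen's actual meter pipeline — a minimal AWS Glue job script follows; the catalog database, table, and JDBC connection names are placeholders.

import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Hypothetical Glue job skeleton; all names below are placeholders.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw device events from the Glue Data Catalog.
events = glue_context.create_dynamic_frame.from_catalog(
    database="meter_raw", table_name="device_events"
)

# Keep only well-formed events before loading.
valid_events = Filter.apply(frame=events, f=lambda row: row["panelist_id"] is not None)

# Write to a Postgres table through a pre-defined Glue JDBC connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=valid_events,
    catalog_connection="postgres-connection",
    connection_options={"dbtable": "processed_events", "database": "meterdb"},
)

job.commit()

A script like this only runs inside a Glue job environment; in a CI/CD setup of the kind the posting describes, the transformation logic would typically be factored out so it can be unit tested locally before deployment.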
Posted 1 week ago
4.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
BOLD is seeking an ETL Specialist who will help build the architecture and maintain a robust, scalable, and sustainable business intelligence platform. Assisted by the Data Team, this role will work with highly scalable systems, complex data models, and a large amount of transactional data. Job Description ABOUT THIS TEAM The BOLD Business Intelligence (BI) team is a centralized team responsible for managing all aspects of the organization's BI strategy, projects and systems. The BI team enables business leaders to make data-driven decisions by providing reports and analysis. The team is responsible for developing and managing a latency-free, credible enterprise data warehouse which is a data source for decision making and input to various functions of the organization, like Product, Finance, Marketing, Customer Support etc. The BI team has four sub-components: Data Analysis, ETL, Data Visualization and QA. It manages deliveries through Snowflake, Sisense and MicroStrategy as main infrastructure solutions. Other technologies, including Python, R and Airflow, are also used in ETL, QA and data visualizations. WHAT YOU’LL DO Architect, develop, and maintain a highly scalable data warehouse and build/maintain ETL processes. Utilize Python and Airflow to integrate data from across the business into the data warehouse. Integrate third-party data, such as Google Analytics, Google Ads and Iterable, into the data warehouse WHAT YOU’LL NEED Experience working as an ETL developer in a Data Engineering, Data Warehousing or Business Intelligence team Understanding of data integration/data engineering architecture and awareness of ETL standards, methodologies, guidelines and techniques Hands-on with the Python programming language and its packages like Pandas, NumPy Strong understanding of SQL queries, aggregate functions, complex joins and performance tuning Should have good exposure to databases like Redshift/SQL Server/Oracle/PostgreSQL (any one of these) Broad understanding of data warehousing and dimensional modelling concepts EXPERIENCE- Senior Software Engineer, ETL: 4.5 years+ BENEFITS Outstanding Compensation Competitive salary Tax-friendly compensation structure Bi-annual bonus Annual Appraisal Equity in company 100% Full Health Benefits Group Mediclaim, personal accident, & term life insurance Group Mediclaim benefit (including parents' coverage) Practo Plus health membership for employees and family Personal accident and term life insurance coverage Flexible Time Away 24 days paid leaves Declared fixed holidays Paternity and maternity leave Compassionate and marriage leave Covid leave (up to 7 days) ADDITIONAL BENEFITS Internet and home office reimbursement In-office catered lunch, meals, and snacks Certification policy Cab pick-up and drop-off facility About BOLD We Transform Work Lives As an established global organization, BOLD helps people find jobs. Our story is one of growth, success, and professional fulfillment. We create digital products that have empowered millions of people in 180 countries to build stronger resumes, cover letters, and CVs. The result of our work helps people interview confidently, finding the right job in less time. Our employees are experts, learners, contributors, and creatives. We Celebrate And Promote Diversity And Inclusion We value our position as an Equal Opportunity Employer. We hire based on qualifications, merit, and our business needs.
We don't discriminate regarding race, color, religion, gender, pregnancy, national origin or citizenship, ancestry, age, physical or mental disability, veteran status, sexual orientation, gender identity or expression, marital status, genetic information, or any other applicable characteristic protected by law.
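The BOLD role above centers on Python and Airflow pipelines that pull third-party sources such as Google Analytics into the warehouse. A minimal, hedged sketch of such a DAG is below, assuming a recent Airflow 2.x install; the DAG id, task logic, and source are illustrative stubs, not BOLD's actual pipeline.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical daily ingestion DAG; the extract/load functions are stubs.
def extract_ga_sessions(**context):
    # In a real pipeline this would call the Google Analytics API for the execution date.
    print("extracting sessions for", context["ds"])

def load_to_warehouse(**context):
    # In a real pipeline this would COPY the extracted file into the warehouse (e.g., Snowflake).
    print("loading sessions for", context["ds"])

with DAG(
    dag_id="ga_sessions_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_ga_sessions)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load

Keeping each task idempotent for its execution date ("ds") is the usual design choice here, so that backfills and reruns do not duplicate rows in the warehouse.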
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description About Amazon.com: Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world. About Team The RBS team is an integral part of the Amazon online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection and good product information. The team’s primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience. Overview Of The Role The ideal candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimizations and systems thinking and will be required to engage directly with multiple internal teams to drive business projects/automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven work environment. Responsibilities Include Working across teams and the Ops organization at country, regional and/or cross-regional level to drive improvements and enable solutions for customers, with cost savings in process workflow, systems configuration and performance metrics. Basic Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field Proficiency in automation using Python Excellent oral and written communication skills Experience with SQL, ETL processes, or data transformation Preferred Qualifications Experience with scripting and automation tools Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions Understanding of cloud services, serverless architecture, and systems integration Key job responsibilities As a Business Intelligence Engineer in the team, you will collaborate closely with business partners to architect, design, and implement BI projects & automations. Responsibilities Design, development and ongoing operations of scalable, performant data warehouse (Redshift) tables, data pipelines, reports and dashboards. Development of moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda, etc.) Development of dashboards and reports. Collaborating with stakeholders to understand business domains, requirements, and expectations.
Additionally, working with owners of data source systems to understand capabilities and limitations. Deliver minimally to moderately complex data analysis; collaborating as needed with Data Science as complexity increases. Actively manage the timeline and deliverables of projects, anticipate risks and resolve issues. Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Internal Job Description Retail Business Service, ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insist on the Highest Standards, and has strong engineering and operational best practices experience. Basic Qualifications 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science or related field. Experience with Data modeling, SQL, ETL, Data Warehousing and Data Lakes. Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.) Expert-level SQL. Proficiency with one or more general purpose programming languages (e.g. Python, Java, Scala, etc.) Knowledge of AWS products such as Redshift, Quicksight, and Lambda. Excellent verbal/written communication & data presentation skills, including ability to succinctly summarize key findings and effectively communicate with both business and technical teams. Preferred Qualifications Experience with data-specific programming languages/packages such as R or Python Pandas. Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR. Knowledge of machine learning techniques and concepts. Basic Qualifications 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in Statistical Analysis packages such as R, SAS and Matlab Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A2994013
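The role above combines SQL, Python, AWS Lambda, and Redshift pipelines. As an illustrative sketch only — not an Amazon-internal system — a small Lambda handler that loads a newly landed S3 extract into Redshift via the Redshift Data API might look like this; the cluster, role ARN, database, and table names are placeholders.

import boto3

# Hypothetical sketch: cluster, IAM role, database, and table names are placeholders.
redshift_data = boto3.client("redshift-data")

def handler(event, context):
    """Triggered (for example) by an S3 event when a new extract lands; issues a COPY into Redshift."""
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    copy_sql = (
        f"COPY analytics.orders_staging "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' "
        f"FORMAT AS PARQUET;"
    )

    response = redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )
    return {"statement_id": response["Id"]}

The Redshift Data API call is asynchronous, so a production pipeline would usually poll or subscribe for statement completion and then run validation queries before publishing the data to dashboards.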
Posted 1 week ago
3.0 years
0 Lacs
Hyderābād
Remote
About Us: Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Overview: At Seismic, we're proud of our engineering culture where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows. Who You Are: Have working knowledge of one OO language, preferably C#, but won’t hold your Java expertise against you (you’re the type of person who’s interested in learning and becoming an expert at new things). Additionally, we’ve been using Python more and more, and bonus points if you’re familiar with Scala. Have experience with architecturally complex distributed systems. Highly focused on operational excellence and quality – you have a passion to write clean and well-tested code and believe in the testing pyramid. Outstanding verbal and written communication skills with the ability to work with others at all levels, effective at working with geographically remote and culturally diverse teams. You enjoy solving challenging problems, all while having a blast with equally passionate team members. Conversant in AI engineering. You’ve been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc. Who you are: BS or MS in Computer Science, similar technical field of study, or equivalent practical experience. 3+ years of software development experience within a SaaS business. Must have familiarity with .NET Core, C#, and related frameworks. Experience in data engineering - building and managing Data Pipelines, ETL processes, and familiarity with various technologies that drive them: Kafka, FiveTran (Optional), Spark/Scala (Optional), etc. Data warehouse experience with Snowflake, or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc). Familiarity with RESTful microservice-based APIs Experience in modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) a big plus Experience with Scrum and the Agile development process.
Familiarity with developing in cloud-based environments Optional: Experience with 3rd party integrations Optional: familiarity with Meeting systems like Zoom, WebEx, MS Teams Optional: familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot. What you'll be doing: Collaborating with experienced software engineers, data scientists and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience. Building large-scale platform infrastructure and REST APIs serving machine-learning-driven content recommendations to Seismic products. Leveraging the power of context in third-party applications such as CRMs to drive machine learning algorithms and models. Helping build next-gen agentic tooling for reporting and insights Processing large amounts of internal and external system data for analytics, caching, modeling and more. Identifying performance bottlenecks and implementing solutions for them. Participating in code reviews, system design reviews, agile ceremonies, bug triage and on-call rotations. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 1 week ago
10.0 years
3 - 8 Lacs
Hyderābād
On-site
Key Responsibilities: Design, develop, test, deploy and troubleshoot SQL scripts and Stored Procedures that implement complex ETL processes. Must have experience in Query Optimization. Must be able to understand complex business logic and fine-tune long-running queries Must be expert at reverse engineering and extracting business requirements and rules from the code. Scheduling and monitoring the ETL jobs using SSIS. Build and review functional and business specifications Expert understanding of PostgreSQL, Stored Procedures, Views, and Functions. Provide the estimates for the ETL work Database/Data Warehouse performance tuning, monitoring and troubleshooting expertise. Expertise in query construction, including writing and optimizing complex SQL queries that contain subqueries, joins, and derived tables Troubleshooting and fixing the code Unit testing the code and documentation of results Must be expert at providing non-functional requirements. Help create test data and provide support to the QA process. Work with the gatekeeper on promoting the data stage jobs from QA to Production. Build, patch, and apply data fixes in the production environment. Ensure very high availability, to the scale of 99.999%. Establish coding practices, best practices, and SOPs. Participate in code reviews and enforce relevant processes. Strong analytical and thinking capabilities, good communication skills to conduct requirement gathering sessions/interview customers Ability to perform independent research to understand the product requirements and customer needs Communicates effectively with the project teams and other stakeholders. Translate technical details to a non-technical audience. Expert at creating architectural artifacts for the Data Warehouse. Team and effort management. Ability to set expectations for the client and the team. Ensure all deliverables are delivered on time at the highest quality. Qualifications and Experience: Years Of Experience: 10+ years Technical Skills: ETL: SSIS; SQL: Stored Procedures, Functions, Triggers, etc.; Query Optimization; Server monitoring; ETL: AWS Glue; DBMS: AWS Aurora MySQL, AWS Redshift, PostgreSQL; Cloud Services: AWS cloud services, including EC2, RDS, S3, and IAM; Data Skills: SQL performance tuning; Coding: knowledge of a programming language like C#, Python or Java, to oversee dev resources. ABOUT QUALIZEAL QualiZeal is North America’s fastest-growing independent digital quality engineering services company. With a diverse portfolio of digital transformation services encompassing Quality Engineering, Digital Engineering, Advisory and Transformation, and Emerging Technology Testing, QualiZeal empowers organizations of all sizes globally to achieve quality excellence and sustainable market leadership. With a headcount of 850+ elite software quality engineers and trusted by 70+ global enterprises over 200+ projects across 12+ industries, QualiZeal is widely recognized for its pioneering innovation and industry-leading client and employee Net Promoter Scores (NPS). QualiZeal has been acknowledged as a Gold Stevie® Award winner for 2025 in the Fastest Growing Tech Company of the Year category. Additionally, QualiZeal has also been named the winner of the Bronze Stevie® Award for 2024 Tech Startup of the Year (Services) category in the 22nd Annual American Business Awards®.
We also have been recognized as a Major Contender in Everest Group’s Specialist and Next-Generation QE Services PEAK® Matrix 2023 reports and as a notable provider in Forrester’s Continuous Automation and Testing (CAT) Services Landscape Report (Q4 2023).
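Since the QualiZeal role above revolves around stored procedures and ETL in PostgreSQL, here is a hedged, minimal sketch of that pattern driven from Python with psycopg2. The connection settings, table names, and procedure are invented for illustration and are not the employer's codebase.

import psycopg2

# Hypothetical sketch; connection settings and table/column names are placeholders.
conn = psycopg2.connect(host="localhost", dbname="etl_db", user="etl_user", password="secret")
conn.autocommit = True

create_proc = """
CREATE OR REPLACE PROCEDURE refresh_daily_sales(p_day DATE)
LANGUAGE plpgsql
AS $$
BEGIN
    -- Idempotent refresh: clear the day, then re-aggregate it from the source table.
    DELETE FROM daily_sales WHERE sales_day = p_day;
    INSERT INTO daily_sales (sales_day, store_id, total_amount)
    SELECT p_day, store_id, SUM(amount)
    FROM sales
    WHERE sales_day = p_day
    GROUP BY store_id;
END;
$$;
"""

with conn.cursor() as cur:
    cur.execute(create_proc)
    # CALL the procedure for one business day; EXPLAIN ANALYZE on the underlying
    # SELECT is the usual starting point when tuning a slow refresh.
    cur.execute("CALL refresh_daily_sales(%s);", ("2024-06-01",))

conn.close()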
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description The Profit Intelligence team in Amazon Retail is seeking a seasoned and talented Senior Data Engineer to join the Historical Contribution Profit (HCP) team. HCP is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of data and Business Intelligence solutions using big data technologies and native AWS (NAWS). As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, Python, etc. Major Responsibilities Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support business Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation Keep up to date with big data technologies, evaluate and make decisions around the use of new or existing software products to design the data architecture Design, build and own all the components of a high-volume data warehouse end to end. Provide end-to-end data engineering support for project lifecycle execution (design, execution and risk assessment) Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources Own the functional and nonfunctional scaling of software systems in your ownership area. Implement big data solutions for distributed computing. Key job responsibilities As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices to deliver high-quality products. About The Team Profit Intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving the growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers, who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks.
We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining. Basic Qualifications 3+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Tamil Nadu Job ID: A3031357
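The Senior Data Engineer role above emphasizes data modeling and warehousing on Redshift. As an illustrative, hedged sketch — not the HCP team's actual schema — a simple star-schema fact table with distribution and sort keys, plus an idempotent daily load, could be expressed from Python with the redshift_connector driver; every endpoint, credential, and table name below is a placeholder.

import redshift_connector

# Hypothetical sketch; cluster endpoint, credentials, and table design are illustrative only.
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="profit",
    user="etl_user",
    password="secret",
)
conn.autocommit = True
cur = conn.cursor()

# A simple dimensional model: one fact table keyed to item and date,
# distributed and sorted for the most common join/filter pattern.
cur.execute("""
CREATE TABLE IF NOT EXISTS fact_shipment_profit (
    shipment_id   BIGINT NOT NULL,
    item_id       BIGINT NOT NULL,
    ship_date     DATE   NOT NULL,
    revenue       DECIMAL(18, 4),
    cost          DECIMAL(18, 4),
    profit        DECIMAL(18, 4)
)
DISTKEY (item_id)
SORTKEY (ship_date);
""")

# Idempotent load pattern: delete-and-insert one day from a staging table,
# so pipeline reruns do not duplicate rows.
cur.execute("DELETE FROM fact_shipment_profit WHERE ship_date = %s;", ("2024-06-01",))
cur.execute("""
INSERT INTO fact_shipment_profit
SELECT shipment_id, item_id, ship_date, revenue, cost, revenue - cost
FROM stage_shipment_profit
WHERE ship_date = %s;
""", ("2024-06-01",))

conn.close()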
Posted 1 week ago