1.0 - 31.0 years
1 - 2 Lacs
Jaipur
On-site
🧠 About the Role
We are seeking a proactive and detail-oriented Apache Superset & SQL Expert with 1+ years of experience in the healthcare domain. You'll be responsible for building insightful BI dashboards and maintaining complex data pipelines to support mission-critical analytics for healthcare operations and compliance reporting.
✅ Key Responsibilities
Develop and maintain advanced Apache Superset dashboards tailored for healthcare KPIs and operational metrics
Write, optimise, and maintain complex SQL queries to extract and transform data from multiple healthcare systems
Collaborate with data engineering and clinical teams to define and model datasets for visualisation
Ensure dashboards comply with healthcare data governance, privacy (e.g., HIPAA), and audit requirements
Monitor performance, implement row-level security, and maintain a robust Superset configuration
Translate clinical and operational requirements into meaningful visual stories
🧰 Required Skills & Experience
1+ years of domain experience in healthcare analytics or working with healthcare datasets (EHR, claims, patient outcomes, etc.)
3+ years of experience working with Apache Superset in a production environment
Strong command of SQL, including query tuning, joins, aggregations, and complex transformations
Hands-on experience with data modelling and relational database design
Solid understanding of clinical terminology, healthcare KPIs, and reporting workflows
Experience working with PostgreSQL, MySQL, or other SQL-based databases
Strong documentation, communication, and stakeholder-facing skills
🌟 Nice-to-Have
Familiarity with HIPAA, HL7/FHIR data structures, or other regulatory standards
Experience with Python, Flask, or Superset plugin development
Exposure to modern healthcare data platforms, dbt, or Airflow
Experience integrating Superset with EMR systems, clinical data lakes, or warehouse systems like Redshift or BigQuery
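For illustration only (not part of the posting): the responsibilities above centre on SQL-backed Superset datasets, so below is a minimal sketch of the kind of healthcare-KPI query such a dataset might be built on, run here via SQLAlchemy. The connection string, the admissions table, and all column names are assumptions.

```python
# Illustrative only: a healthcare-KPI style query of the kind a Superset
# dataset might be built on. The table (admissions) and columns
# (facility_id, discharge_date, readmitted_30d) are hypothetical.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@host:5432/clinical_dw")

READMISSION_RATE_SQL = text("""
    SELECT
        facility_id,
        date_trunc('month', discharge_date) AS month,
        COUNT(*) AS discharges,
        SUM(CASE WHEN readmitted_30d THEN 1 ELSE 0 END) AS readmissions,
        ROUND(100.0 * SUM(CASE WHEN readmitted_30d THEN 1 ELSE 0 END)
              / NULLIF(COUNT(*), 0), 2) AS readmission_rate_pct
    FROM admissions
    WHERE discharge_date >= :start_date
    GROUP BY facility_id, date_trunc('month', discharge_date)
    ORDER BY month, facility_id
""")

with engine.connect() as conn:
    rows = conn.execute(READMISSION_RATE_SQL, {"start_date": "2024-01-01"}).fetchall()
    for row in rows[:5]:
        print(row)
```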
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
maharashtra
On-site
As a Lead Data Engineer, you will leverage your 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Marts, Data Lakes, Big Data, Cloud (AWS), and Data Governance. Expertise in a modern programming language such as Scala, Python, or Java, with a preference for Spark/PySpark, will be crucial in this role. You should have experience with configuration management and version control tools such as Git, along with familiarity working within a CI/CD framework. Experience in building frameworks will be considered a significant advantage. A minimum of 8 years of recent hands-on SQL programming experience in a Big Data environment is necessary, preferably in Hadoop/Hive. Proficiency in PostgreSQL, RDBMS, NoSQL, and columnar databases will be beneficial for this role. Your hands-on experience with AWS Cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, and EMR, will play a vital role in developing and maintaining ETL applications and data pipelines using big data technologies. Experience with Apache Kafka, Spark, and Airflow is a must-have for this position. If you are excited about this opportunity and possess the required skills and experience, please share your CV with us at omkar@hrworksindia.com. We look forward to potentially welcoming you to our team. Regards, Omkar
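Kafka, Spark, and Airflow are named as must-haves above. For illustration only, here is a minimal PySpark Structured Streaming sketch showing one common pattern such a role involves: consuming a Kafka topic and landing it in S3 as Parquet. The broker address, topic, bucket paths, and event schema are all assumptions, not part of the posting.

```python
# Illustrative sketch: consume events from Kafka and land them in S3 as
# Parquet with Spark Structured Streaming. Topic, bucket, and schema are
# hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-bucket/orders/")
         .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
         .outputMode("append")
         .start())
query.awaitTermination()
```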
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
You are an experienced Senior QA Specialist being sought to join a dynamic team for a critical AWS to GCP migration project. Your primary responsibility will be the rigorous testing of data pipelines and data integrity in GCP to ensure seamless reporting and analytics capabilities. Your key responsibilities will include designing and executing test plans to validate data pipelines re-engineered from AWS to GCP, ensuring data integrity and accuracy. You will work closely with data engineering teams to understand AVRO, ORC, and Parquet file structures in AWS S3, and analyze the data in external tables created in Athena used for reporting. It will be essential to ensure that the schema and data in BigQuery match Athena to support reporting in Power BI. Additionally, you will be required to test and validate Spark pipelines and other big data workflows in GCP. Documenting all test results and collaborating with development teams to resolve discrepancies will also be part of your responsibilities, along with providing support to UAT business users during UAT testing. To excel in this role, you should possess proven experience in QA testing within a big data DW/BI ecosystem. Strong familiarity with cloud platforms such as AWS, GCP, or Azure, with hands-on experience in at least one, is necessary. Deep knowledge of data warehousing solutions like BigQuery, Redshift, Synapse, or Snowflake is essential. Expertise in testing data pipelines and understanding of different file formats like Avro and Parquet is required. Experience with reporting tools such as Power BI or similar is preferred. Your excellent problem-solving skills and ability to work independently will be valuable, along with strong communication skills and the ability to collaborate effectively across teams.
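One concrete check implied by the responsibilities above is reconciling a table between Athena (AWS) and BigQuery (GCP). The sketch below is illustrative only: it compares row counts for one hypothetical table, assuming boto3 and google-cloud-bigquery credentials are already configured; the database, table, and results bucket names are made up.

```python
# Illustrative migration-QA check: compare row counts for the same logical
# table in Athena and BigQuery. All names are hypothetical.
import time
import boto3
from google.cloud import bigquery

ATHENA_SQL = "SELECT COUNT(*) FROM reporting_db.orders"
BQ_SQL = "SELECT COUNT(*) AS n FROM `my-project.reporting.orders`"

def athena_count() -> int:
    client = boto3.client("athena")
    qid = client.start_query_execution(
        QueryString=ATHENA_SQL,
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = client.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    assert state == "SUCCEEDED", f"Athena query ended in state {state}"
    rows = client.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row

def bigquery_count() -> int:
    return int(next(iter(bigquery.Client().query(BQ_SQL).result()))["n"])

a, b = athena_count(), bigquery_count()
print(f"Athena={a} BigQuery={b} match={a == b}")
```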
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We're looking for a talented technology leader to join our passionate engineering team as a Senior Software Engineer and help us scale and grow our cloud-based systems and technologies with a keen eye towards software quality and operational excellence. As a tech "unicorn" headquartered in San Diego, this is an amazing opportunity for the right person to join and guide the technical vision of this pre-IPO software company as we make history in the sales enablement space! As the Senior Software Engineer, you will play a vital part in driving solid cloud architecture and ensuring best engineering practices across multiple engineering teams. You, along with your globally dispersed teammates, will collaborate to build microservice-based systems supporting multiple products that enable sharing and collaboration between the sales and marketing departments of our customers. You will work closely with our product leads, engineering leads, and team, and contribute to best practices for CI/CD. You will also mentor junior engineers and help grow a strong engineering team. This is an opportunity to work as a technical thought-leader and share ideas to build the best-in-class microservice that wows our internal and external stakeholders with its functionality and simplicity. At Seismic, we're committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Develop, improve, and maintain our microservice and ensure seamless integration with the rest of the Seismic platform. Help grow a new local engineering team while collaborating on and driving technical and architectural decisions across multiple remote teams. Collaborate with globally dispersed product managers, designers, and software engineers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience with Seismic. Explore new technologies and industry trends and bring your findings to life in our products. Participate in and contribute towards code reviews, bug/issue triage, and documentation. Contribute to troubleshooting and continuous quality improvements. Bachelor's degree in Computer Science, a similar technical field of study, or equivalent practical experience. 5+ years of software engineering experience and a passion for building and innovating – you stay up to date with the latest technologies and trends in development.
Strong familiarity with .NET Core and C# or similar object-oriented languages and frameworks.
Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.).
Familiarity with RESTful microservice-based APIs.
Experience with Scrum and the Agile development process.
Familiarity and comfort developing in cloud-based environments (Azure, AWS, Google Cloud, etc.).
Optional: Experience with HTML/CSS/JS and modern SPA frameworks (React, Vue.js, etc.).
Optional: Experience with 3rd party integrations.
Optional: Familiarity with meeting systems like Zoom, WebEx, MS Teams.
Optional: Familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot.
Seen as an active contributor in the team problem-solving process – you aren't afraid to share your opinions in a low-ego manner or roll up your sleeves and write critical-path code or refactor a significant piece of code.
Deep experience across multiple software projects, driving the end-to-end software development lifecycle of an architecturally complex system or product.
Ability to think tactically as well as strategically, respecting what came before you and always thinking longer-term.
Highly focused on operational excellence and software quality, with experience in CI/CD and best operational practices.
Your technical skills are sought after as you develop in a pragmatic and efficient manner. You enjoy solving challenging problems, all while having a blast with equally passionate and talented team members.
Conversant in AI engineering. You've been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc.
If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here.
Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
What Will You Do?
Build and own ETL data pipelines that will power all our reporting & analytics needs
Develop clean, safe, testable and cost-efficient solutions
Build fast and reliable pipelines with an underlying data model that can scale according to business needs and growth
Understand the system you are building, foresee possible shortcomings and be able to resolve or compromise appropriately
Mentor junior engineers on data quality, pipelines, etc.
Company Overview
Fanatics is building the leading global digital sports platform to ignite and harness the passions of fans and maximize the presence and reach for hundreds of partners globally. Leveraging these long-standing partnerships, a database of more than 80 million global consumers and a trusted, beyond recognizable brand name, Fanatics is expanding its position as the global leader for licensed sports merchandise to now becoming a next-gen digital sports platform, featuring an array of offerings across the sports ecosystem. The Fanatics family of companies currently includes Fanatics Commerce, a vertically-integrated licensed merchandise business that has changed the way fans purchase their favorite team apparel, jerseys, headwear and hardgoods through a tech-infused approach to making and quickly distributing fan gear in today's 24/7 mobile-first economy; Fanatics Collectibles, a transformative company that is building a new model for the hobby and giving collectors an end-to-end collectibles experience; and Fanatics Betting & Gaming, a mobile betting, gaming and retail Sportsbook platform. Major Fanatics partners include professional sports leagues (NFL, MLB, NBA, NHL, NASCAR, MLS, PGA) and hundreds of collegiate and professional teams, which include several of the biggest global soccer clubs. As a market leader with more than 8,000 employees, and hundreds of partners, suppliers, and vendors worldwide, we take responsibility for driving toward more ethical and sustainable practices. We are committed to building an inclusive Fanatics community, reflecting and representing society at every level of the business, including our employees, vendors, partners and fans. Fanatics is also dedicated to making a positive impact in the communities where we all live, work, and play through strategic philanthropic initiatives. At Fanatics, we're a diverse, passionate group of employees aiming to ignite pride and passion in the fans we outfit, celebrate and support. We recognize that diversity helps drive and foster innovation, and through our IDEA program (inclusion, diversity, equality and advocacy) at Fanatics we provide employees with tools and resources to feel connected and engaged in who they are and what they do to support the ultimate fan experience.
Job Requirements
Must have 5+ years of experience in the Data Engineering field, with a proven track record of exposure to Big Data technologies such as Hadoop, Amazon EMR, Hive and Spark.
Expertise in SQL technologies and at least one major Data Warehouse technology (Snowflake, Redshift, BigQuery, etc.).
Must have experience in building data platforms – designing and building data models, integrating data from many sources, building ETL and data-flow pipelines, and supporting all parts of the data platform.
Programming proficiency in Python and Scala, with experience writing modular, reusable, and testable code, including robust error handling and logging in data engineering applications.
Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, EC2, RDS, and IAM.
Experience with orchestration tools such as Apache Airflow for scheduling, monitoring, and managing data pipelines in a production environment.
Familiarity with CI/CD practices, automated deployment pipelines, and version control systems (e.g., Git, GitHub/GitLab), ensuring reliable and repeatable data engineering workflows.
Data analysis skills – able to make arguments with data and proper visualization.
Energetic, enthusiastic, detail-oriented, and passionate about producing high-quality analytics deliverables.
Must have experience in developing applications with high performance and low latency.
Ability to take ownership of initiatives and drive them independently from conception to delivery, including post-deployment monitoring and support.
Strong communication and interpersonal skills with the ability to build relationships with stakeholders, understand business requirements, and translate them into technical solutions.
Comfortable working cross-functionally in a multi-team environment, collaborating with data analysts, product managers, and engineering teams to deliver end-to-end data solutions.
Job Description
We are seeking a Sr. Data Engineer with strong design and development skills who upholds scalability, availability and excellence when building the next generation of our data pipelines and platform. You are an expert in various data processing technologies and data stores, appreciate the value of clear communication and collaboration, and are devoted to continual capacity planning and performance fine-tuning for emerging business growth. As the Senior Data Engineer, you will be mentoring junior engineers in the team.
Good To Have
Experience in Web Services, API integration, and data exchanges with third parties is preferred.
Experience in Snowflake is a big plus.
Experience in NoSQL technologies (MongoDB, FoundationDB, Redis) is a plus.
We would appreciate candidates who can demonstrate business-side functional understanding and effectively communicate the business context alongside their technical expertise.
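The posting above calls for orchestrating production pipelines with Apache Airflow. For illustration only, here is a minimal Airflow 2.x DAG sketch of the extract-transform-load shape such pipelines typically take; the DAG id, schedule, and stubbed task logic are assumptions, not Fanatics code.

```python
# Minimal illustrative Airflow 2.x DAG: extract from S3, transform, load to a
# warehouse. Task bodies are stubbed; names and schedule are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw files from S3")

def transform(**_):
    print("apply business rules / build fact tables")

def load(**_):
    print("copy curated data into the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```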
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: AWS Data Engineer
Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS EMR, Glue, and Python/PySpark, along with data analytics expertise in Amazon Athena and Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.
Responsibilities
Design and implement ETL workflows using AWS EMR, Glue, Python, and PySpark
Develop and optimize queries using Amazon Athena and Redshift
Build scalable data pipelines to ingest, transform, and load data from various sources
Ensure data quality, integrity, and security across AWS services
Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions
Monitor and troubleshoot ETL jobs and cloud infrastructure performance
Automate data workflows and integrate with CI/CD pipelines
Required Skills And Qualifications
Hands-on experience with AWS EMR, Glue, Athena, and Redshift
Strong programming skills in Python with Pandas, NumPy, PySpark, and SQL
Experience with ETL design, implementation, and optimization
Familiarity with S3, Lambda, CloudWatch, and other AWS services
Experience with schema design, partitioning, and query optimization in Athena
Proficiency in version control (Git) and agile development practices
Work Location: Chennai, Bangalore, Hyderabad
Tier Level: 3-4
Experience: 4-6 years
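The posting mentions partitioning and query optimization in Athena. As an illustration only, the sketch below shows a Glue-style PySpark job that writes partitioned Parquet to S3, which is what lets Athena prune partitions at query time. Bucket names, columns, and the partition key are assumptions.

```python
# Illustrative Glue-style PySpark sketch: read raw CSV from S3 and write
# partitioned Parquet that Athena can query with partition pruning.
# Bucket names and the partition column are hypothetical.
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql.functions import to_date, col

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-raw/orders/"))

curated = raw.withColumn("order_date", to_date(col("order_ts")))

(curated.write
 .mode("overwrite")
 .partitionBy("order_date")   # enables partition pruning in Athena
 .parquet("s3://example-curated/orders/"))
```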
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
The Maintenance Automation Platform (MAP) team within the Global Reliability and Maintenance Engineering (RME) Central Team is looking for an exceptional business intelligence engineer to join us. In this role, you will work on an analytical team to bust myths, create insights, and produce recommendations to help Central RME deliver world-class service to the Amazon network. As part of the team you will be involved in all phases of research, experiment, design and analysis, including defining research questions, designing experiments, identifying data requirements, and communicating insights and recommendations. You'll also be expected to continuously learn new systems, tools, and industry best practices to analyze big data and enhance our analytics. These are exciting, fast-paced businesses in which we get to work on extremely interesting analytical problems, in an environment where you get to learn from other engineers and apply business intelligence to help leadership make informed decisions. Your work focuses on complex and/or ambiguous problem areas in existing or new BI initiatives. You take the long-term view of your team's solutions and how they fit into the team's architecture. You consider where each solution is in its lifecycle and, where appropriate, proactively fix architecture deficiencies. You understand the capabilities and limitations of the systems you work with (e.g. cluster size, concurrent users, data classification). You are able to explain these limitations to technical and non-technical audiences, helping them understand what's currently possible and which efforts need a technology investment. You take ownership of team infrastructure, providing a system-wide view and design guidance. You make things simpler. You drive BI engineering best practices (e.g. Operational Excellence, code reviews, syntax and naming conventions, metric definitions, alarms) and set standards. You collaborate with customers and other internal partners to refine the problem into specific deliverables, and you understand the business context well enough to recommend alternatives and anticipate future requests. In addition to stakeholders, you may work with partner teams (business and technical) and Data Engineers/Data Scientists/BAs/SDEs/other BIEs to design and deliver the right solution. You contribute to the professional development of colleagues, improving their business and technical knowledge and their understanding of BI engineering best practices.
Key job responsibilities
Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards on the key drivers of our business
Partner with operations and business teams to consult, develop and implement KPIs, automated reporting solutions, and infrastructure improvements to meet business needs
Develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support business needs
Perform both ad-hoc and strategic analyses
Strong verbal/written communication and presentation skills, including an ability to effectively communicate with both business and technical teams
Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing and building ETL pipelines
Experience with statistical analysis packages such as R, SAS and MATLAB
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A3031639
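The qualifications above describe the "pull with SQL, process with Python" workflow. For illustration only, here is a small sketch of that pattern against Redshift via psycopg2 and pandas; the connection details, schema, and work_orders table are assumptions, not Amazon systems.

```python
# Illustrative only: query a Redshift warehouse with SQL, then reshape the
# result in pandas for reporting. Connection details and the rme.work_orders
# table are hypothetical.
import pandas as pd
import psycopg2

SQL = """
    SELECT site_id,
           date_trunc('week', completed_at) AS week,
           COUNT(*) AS closed_work_orders
    FROM rme.work_orders
    WHERE completed_at >= dateadd(week, -12, current_date)
    GROUP BY 1, 2
"""

with psycopg2.connect(host="redshift-host", port=5439, dbname="analytics",
                      user="bi_user", password="***") as conn:
    df = pd.read_sql(SQL, conn)

weekly = df.pivot_table(index="week", columns="site_id",
                        values="closed_work_orders").fillna(0)
print(weekly.tail())
```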
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that's what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product?” – Jeff Bezos
Amazon.com's success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? In order to make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What are other addresses that also have packages to be delivered on the same day? Are we optimizing the delivery associate's route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives
Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience.
The Successful Candidate Will Be Able To
Effectively manage customer expectations and resolve conflicts that balance client and company needs.
Develop processes to effectively maintain and disseminate project information to stakeholders.
Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
Serve as a role model for Amazon Leadership Principles inside and outside the organization.
Actively seek to implement and distribute best practices across the operation.
Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with a scripting language (e.g., Python, Java, or R)
Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
Experience applying basic statistical methods (e.g., regression) to difficult business problems
Experience gathering business requirements, using industry-standard business intelligence tool(s) to extract data, formulate metrics and build reports
Track record of generating key business insights and collaborating with stakeholders
Preferred Qualifications
Knowledge of how to improve code quality and optimize BI processes (e.g. speed, cost, reliability)
Knowledge of data modeling and data pipeline design
Experience in designing and implementing custom reporting systems using automation tools
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - Amazon Dev Center India - Hyderabad
Job ID: A3031609
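The basic qualifications call for applying basic statistical methods such as regression. The toy sketch below, on entirely synthetic data, illustrates that kind of analysis: fitting an OLS model that relates a hypothetical delivery defect rate to address-quality signals. The column names and relationship are made up for illustration.

```python
# Toy illustration of a basic regression of the kind the qualifications
# mention. All data is synthetic; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "geocode_confidence": rng.uniform(0.2, 1.0, 500),
    "address_age_days": rng.integers(0, 2000, 500),
})
df["defect_rate"] = 0.3 - 0.25 * df["geocode_confidence"] + rng.normal(0, 0.03, 500)

X = sm.add_constant(df[["geocode_confidence", "address_age_days"]])
model = sm.OLS(df["defect_rate"], X).fit()
print(model.summary())
```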
Posted 2 weeks ago
12.0 - 17.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Business Analyst (Global Commercial Operations)
What You Will Do
Let's do this. Let's change the world. In this vital role you will play a key part in successfully leading the engagement model between Amgen's Technology organization and Global Commercial Operations.
Collaborate with global commercial operations Business SMEs, Data Engineers, Data Scientists and Product Managers to lead business analysis activities, ensuring alignment with engineering and product goals on the Data & AI Product Teams
Become a global commercial operations domain authority in Data & AI technology capabilities by researching, deploying, and sustaining features built according to Amgen's Quality System
Lead the voice-of-the-customer assessment to define business processes and product needs
Work with Product Managers and customers to define scope and value for new developments
Collaborate with Engineering and Product Management to prioritize release scopes and refine the Product Backlog
Ensure non-functional requirements are included and prioritized in the Product and Release Backlogs
Facilitate the breakdown of Epics into Features and sprint-sized User Stories and participate in backlog reviews with the development team
Clearly express features in User Stories/requirements so all team members and partners understand how they fit into the product backlog
Ensure Acceptance Criteria and Definition of Done are well-defined
Work closely with Business SMEs, Data Scientists, and ML Engineers to understand Data product requirements, KPIs, etc.
Analyze the source systems and create the STTM documents
Develop and implement effective product demonstrations for internal and external partners
Maintain accurate documentation of configurations, processes, and changes
Understand end-to-end data pipeline design and dataflow
Apply knowledge of data structures to diagnose data issues for resolution by the data engineering team
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of experience in global commercial operations Information Systems
Preferred Qualifications:
Must-Have Skills
Excellent problem-solving skills and a passion for solving complex challenges in AI-driven technologies
Experience with Agile software development methodologies (Scrum)
Superb communication skills and the ability to work with senior leadership with confidence and clarity
Experience writing user requirements and acceptance criteria in agile project management systems such as JIRA
Experience in managing product features for PI planning and developing product roadmaps and user journeys
Good-to-Have Skills
Demonstrated expertise in data and analytics and related technology concepts
Understanding of data and analytics software systems strategy, governance, and infrastructure
Familiarity with low-code, no-code test automation software
Technical thought leadership
Able to communicate technical or complex subject matter in business terms
Jira Align experience
Experience with DevOps, Continuous Integration, and Continuous Delivery methodology
Soft Skills
Able to work under minimal supervision
Excellent analytical and gap/fit assessment skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills
Technical Skills
Experience with cloud-based data technologies (e.g., Databricks, Redshift, S3 buckets) and AWS (or similar cloud-based platforms)
Experience with design patterns, data structures, test-driven development
Knowledge of NLP techniques for text analysis and sentiment analysis
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 2 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Kolkata, Hyderabad, Pune
Work from Office
Airflow Data Engineer (AWS platform)
Job Title: Apache Airflow Data Engineer ("ROLE" as per TCS Role Master)
• 4-8 years of experience in AWS, Apache Airflow (on the Astronomer platform), Python, PySpark, SQL
• Good hands-on knowledge of SQL and the Data Warehousing life cycle is an absolute requirement.
• Experience in creating data pipelines and orchestrating them using Apache Airflow
• Significant experience with data migrations and development of Operational Data Stores, Enterprise Data Warehouses, Data Lakes and Data Marts.
• Good to have: Experience with cloud ETL and ELT in a tool such as DBT, Glue, EMR or Matillion, or any other ELT tool
• Excellent communication skills to liaise with Business & IT stakeholders.
• Expertise in planning execution of a project and effort estimation.
• Exposure to working in Agile ways of working.
Candidates for this position will be offered with TAIC or TCSL as Entity.
Keywords: Data warehousing, PySpark, GitHub, AWS data platform, Glue, EMR, Redshift, Databricks, Data Marts, DBT/Glue/EMR or Matillion, data engineering, data modelling, data consumption
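The role above centres on building Airflow pipelines that feed an AWS warehouse. As an illustration only, here is a TaskFlow-style sketch (assuming Airflow 2.4+) of a staging-then-COPY pattern into Redshift; the connection, bucket, table, and file listing are all assumptions, and the COPY is only printed rather than executed.

```python
# Illustrative TaskFlow-style DAG (assumes Airflow 2.4+): list staged files,
# then issue a Redshift COPY for them. Bucket, table, and IAM role are
# hypothetical; the COPY statement is printed as a stand-in for a real hook.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def s3_to_redshift():

    @task
    def list_new_files() -> list[str]:
        # A real task would list the S3 prefix for the execution date.
        return ["s3://example-stage/sales/2024-01-01/part-0000.parquet"]

    @task
    def copy_to_redshift(files: list[str]) -> None:
        # A real task would run this through a Redshift/Postgres hook.
        for f in files:
            print(f"COPY analytics.sales FROM '{f}' IAM_ROLE 'arn:aws:iam::...:role/example' FORMAT AS PARQUET;")

    copy_to_redshift(list_new_files())

s3_to_redshift()
```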
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY-Consulting – AWS Staff-Senior
The opportunity
We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture to support analytics, BI, and data science use cases. As a problem-solver with the keen ability to diagnose a client's unique needs, you should be able to see the gap between where clients currently are and where they need to be, and be capable of creating a blueprint to help clients achieve their end goal.
Key Responsibilities:
Design and implement data ingestion pipelines from various sources including on-premise Oracle databases, batch files, and Confluent Kafka.
Develop Python producers and AWS Glue jobs for batch data processing.
Build and manage Spark streaming applications on Amazon EMR.
Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
Develop and maintain data sinks in Redshift and Oracle.
Automate and orchestrate workflows using Apache Airflow.
Monitor, debug, and optimize data pipelines for performance and reliability.
Collaborate with cross-functional teams including data analysts, scientists, and DevOps.
Required Skills and Experience:
Good programming skills in Python and Spark (PySpark).
Hands-on experience with Amazon S3, Glue, and EMR.
Good SQL knowledge of Amazon Redshift and Oracle.
Proven experience in handling streaming data with Kafka and building real-time pipelines.
Good understanding of data modeling, ETL frameworks, and performance tuning.
Experience with workflow orchestration tools like Airflow.
Nice-to-Have Skills:
Infrastructure as Code using Terraform.
Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
Familiarity with DataSync for file movement and medallion architecture for data lakes.
Monitoring and alerting using CloudWatch, Datadog, or Splunk.
Qualifications: BTech / MTech / MCA / MBA
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
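One responsibility above is developing Python producers that feed Kafka. For illustration only, here is a minimal producer sketch using the kafka-python client; the broker address, topic, and payload are assumptions rather than anything specified in the posting.

```python
# Minimal illustrative Kafka producer using the kafka-python client.
# Broker address, topic, and record contents are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

record = {"source": "oracle.orders", "order_id": 12345, "status": "SHIPPED"}
producer.send("ingest.orders", value=record)
producer.flush()  # block until the message is acknowledged
```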
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models
Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
5 to 8 years' experience implementing data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation and manipulation
Experience modelling data for analytical consumers
Hands-on Mantas expertise throughout the full development life cycle, including: requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and on-going technical support
Translate business needs (BRD) into effective technical solutions and documents (FRD/TSD)
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, Scenario Development; thorough knowledge and hands-on experience in Mantas FSDM, DIS, Batch Scenario Manager
Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.
Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta
Others: Basics of job schedulers like Autosys. Basics of entitlement management
Certification on any of the above topics would be an advantage.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Digital Software Engineering
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are
BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest-value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world's 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients – no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines – from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.
What You'll Do
BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally.
This Will Include
Serving as a leader within BCG X and specifically the KEY Impact Management by BCG X Tribe (Transformation, Post-Merger-Integration related software and data products), overseeing the delivery of high-quality software: driving the technical roadmap, architectural decisions and mentoring engineers
Influencing and serving as a key decision maker in BCG X technology selection & strategy
Active "hands-on" role, building intelligent analytical products to solve problems, writing elegant code, and iterating quickly
Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe; responsible for the technology roadmap of existing and new components delivered
Architecting and implementing backend and frontend solutions primarily using .NET, C#, MS SQL Server, Angular, and other technologies best suited for the goals, including open source, i.e. Node, Django, Flask, Python where needed
What You'll Bring
8+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally an agile environment) with exposure to a variety of technologies and solutions, with at least 5 years' experience in an Architect role.
Experience with a wide range of Application and Data architectures, platforms and tools including: Service Oriented Architecture, Clean Architecture, Software as a Service, Web Services, Object-Oriented Languages (like C# or Java), SQL Databases (like Oracle or SQL Server), Relational and Non-relational Databases, hands-on experience with analytics tools and reporting tools, Data Science experience, etc.
Thoroughly up to date in technology:
Modern cloud architectures including AWS, Azure, GCP, Kubernetes
Very strong in .NET, C#, MS SQL Server and Angular technologies in particular
Open source stacks including NodeJS, React, Angular, Flask are good to have
CI/CD / DevSecOps / GitOps toolchains and development approaches
Knowledge of machine learning & AI frameworks
Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow
At least a Bachelor's degree; Master's degree and/or MBA preferred
Team player with excellent work habits and interpersonal skills
Care deeply about product quality, reliability, and scalability
Passion about the people and culture side of engineering teams
Outstanding written and oral communication skills
The ability to travel, depending on project requirements. #BCGXjob
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Introduction: A Career at HARMAN Corporate
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN Corporate, you are integral to our company's award-winning success.
Enrich your managerial and organizational talents – from finance, quality, and supply chain to human resources, IT, sales, and strategy
Augment your comprehensive skillset with expert training across decision-making, change management, leadership, and business development
Obtain 360-degree support throughout your career life cycle, from early-stage to seasoned leader
About The Role
The People Intelligence Data Engineer will play a crucial role in driving data-driven decision-making within the HR function. This role involves leveraging tools to make data available for modeling, uncover insights, predict trends, and optimize HR strategies. The ideal candidate will have a strong analytical mindset, a passion for data, and a deep understanding of HR processes.
What You Will Do
Build, maintain, and troubleshoot HR databases and data systems.
Manage, pre-process, and analyze large datasets to support HR initiatives.
Design and implement ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes to move HR data efficiently.
Utilize SQL, Python, or other programming languages to analyze HR data, identify trends, and generate insights.
Integrate various systems and databases to streamline data flow and accessibility.
Work with the team and take the lead in updating existing datasets and implementing organizational changes to reflect in our datasets and, eventually, in our People Insight inventory of metrics and visuals.
Collaborate with HR and other departments to identify opportunities for AI-driven improvements.
Work with cross-functional teams to implement data-driven solutions that enhance employee experience and operational efficiency.
Provide data-driven insights and recommendations to support the strategic planning process.
Contribute to the development and execution of HR strategies through robust data analysis and reporting.
Investigate and stay current with emerging AI and intelligent tools, assessing their potential applications in the HR domain.
Manage multiple tasks and projects simultaneously, demonstrating an eagerness to contribute and make a positive impact.
Support colleagues in improving daily work processes and strategic planning.
Work effectively within a team, fostering a collaborative and supportive environment.
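The responsibilities above include designing ETL/ELT steps to move HR data. For illustration only, here is a small pandas-based ETL sketch: extract a headcount export, standardize it, and load it into a warehouse staging table. The DSN, file name, columns, and table are assumptions, not HARMAN systems.

```python
# Illustrative HR ETL step: extract a headcount export (CSV), transform it
# with pandas, load it into a staging table. All names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:pass@warehouse-host:5439/people_analytics")

raw = pd.read_csv("workday_headcount_export.csv")                 # extract
clean = (raw.rename(columns=str.lower)                            # transform
            .assign(hire_date=lambda d: pd.to_datetime(d["hire_date"]))
            .drop_duplicates(subset=["employee_id"]))

clean.to_sql("stg_headcount", engine, if_exists="replace", index=False)  # load
```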
What You Need To Be Successful 5+ years of experience with Workday or equivalent systems Proficient in writing queries for relational databases Previous experience in HR analytics or a related field is preferred Proficiency in Python (e.g., Pandas, NumPy), R, or other relevant languages Knowledge of data warehousing platforms such as Snowflake, Amazon Redshift, or BigQuery Experience with HR analytics platforms such as Workday, SAP SuccessFactors, or Oracle HR Cloud General IT knowledge, including database management and system integration Business Acumen – Applies understanding of the business and how one’s area integrates with others to achieve departmental objectives Problem Solving – Identifies and solves technical and operational problems; understands and recognizes the broader impact across the department Impact – Guided by policies and departmental plans; impacts the team’s ability to achieve service, quality, and timeliness objectives Interpersonal Skills – Guides and influences others, internally or externally, to adopt a different point of view Bonus Points if You Have Bachelor’s degree, or equivalent work experience, with a preferred concentration in Computer Science, Information Management, Data Science, or a related field People analytics experience focusing on internal customers Excellent communication and presentation skills to effectively convey complex data insights to non-technical audiences Understanding of the Workday system and associated Workday products Understanding of HR operations and experience in employee data management What Makes You Eligible Be willing to work in an office What We Offer Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.) Extensive training opportunities through our own HARMAN University Competitive wellness benefits Tuition reimbursement “Be Brilliant” employee recognition and rewards program An inclusive and diverse work environment that fosters and encourages professional and personal development You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. 
Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
Job Description
This role will be part of a team that develops software that processes data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device, and also detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development and testing, and is expected to coordinate, support and work with multiple delocalized project teams in multiple regions. As a member of the technical staff with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day, across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system that runs in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.
Responsibilities
System Deployment: Conceive, design and build new features in the existing backend processing pipelines.
CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance Optimization: Identify and address performance bottlenecks in reading, processing and writing data to the backend data stores.
Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.
Qualifications
Key Skills
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Proven experience, minimum 3 years, in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL and databases such as Postgres.
Minimum 2 years of development on an AWS platform.
Strong understanding of CI/CD principles and tools; GitLab a plus.
Excellent problem-solving and debugging skills.
Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply.
Utilizes team collaboration to create innovative solutions efficiently.
Other Desirable Skills
Knowledge of networking principles and security best practices.
AWS certifications.
Experience with Data Warehouses, ETL, and/or Data Lakes very desirable.
Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
Exposure to the Google Cloud Platform (GCP).
Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
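To make the AWS Glue/PySpark qualification above concrete, here is a minimal, hypothetical sketch of a high-volume processing step in that style; it is not Nielsen's actual pipeline code, and the bucket path, column names, and Postgres connection details are invented placeholders.

```python
# Hedged sketch: a small PySpark batch rollup of meter-style usage events.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter-usage-rollup").getOrCreate()

# Read one day of raw usage events (location and schema are assumed).
events = spark.read.parquet("s3://example-bucket/meter-events/dt=2024-01-01/")

# Aggregate browsing activity per panelist device.
daily_usage = (
    events
    .filter(F.col("event_type") == "page_view")
    .groupBy("device_id", "panelist_id")
    .agg(
        F.count("*").alias("page_views"),
        F.sum("duration_seconds").alias("total_seconds"),
    )
)

# Write the rollup to a backend Postgres store for downstream reporting.
(
    daily_usage.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/meterdb")
    .option("dbtable", "daily_device_usage")
    .option("user", "etl_user")
    .option("password", "***")
    .mode("append")
    .save()
)
```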
Posted 2 weeks ago
4.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
BOLD is seeking an ETL Specialist who will help build the architecture and maintain a robust, scalable, and sustainable business intelligence platform. Assisted by the Data Team, this role will work with highly scalable systems, complex data models, and a large amount of transactional data.
Job Description
ABOUT THIS TEAM
BOLD Business Intelligence (BI) team is a centralized team responsible for managing all aspects of the organization's BI strategy, projects, and systems. The BI team enables business leaders to make data-driven decisions by providing reports and analysis. The team is responsible for developing and managing a latency-free, credible enterprise data warehouse, which is a data source for decision making and an input to various functions of the organization like Product, Finance, Marketing, Customer Support, etc. The BI team has four sub-components: Data Analysis, ETL, Data Visualization, and QA. It manages deliveries through Snowflake, Sisense, and Microstrategy as main infrastructure solutions. Other technologies including Python, R, and Airflow are also used in ETL, QA, and data visualizations.
WHAT YOU'LL DO
Architect, develop, and maintain a highly scalable data warehouse and build/maintain ETL processes. Utilize Python and Airflow to integrate data from across the business into the data warehouse. Integrate third-party data, such as Google Analytics, Google Ads, and Iterable, into the data warehouse.
WHAT YOU'LL NEED
Experience working as an ETL developer in a Data Engineering, Data Warehousing, or Business Intelligence team. Understanding of data integration/data engineering architecture and awareness of ETL standards, methodologies, guidelines, and techniques. Hands-on with the Python programming language and its packages like Pandas and NumPy. Strong understanding of SQL queries, aggregate functions, complex joins, and performance tuning. Good exposure to databases like Redshift/SQL Server/Oracle/PostgreSQL (any one of these). Broad understanding of data warehousing and dimensional modelling concepts.
EXPERIENCE - Senior Software Engineer, ETL: 4.5 years+
BENEFITS
Outstanding Compensation: Competitive salary, tax-friendly compensation structure, bi-annual bonus, annual appraisal, equity in company.
100% Full Health Benefits: Group Mediclaim, personal accident, and term life insurance; Group Mediclaim benefit (including parents' coverage); Practo Plus health membership for employees and family; personal accident and term life insurance coverage.
Flexible Time Away: 24 days paid leaves, declared fixed holidays, paternity and maternity leave, compassionate and marriage leave, Covid leave (up to 7 days).
ADDITIONAL BENEFITS
Internet and home office reimbursement; in-office catered lunch, meals, and snacks; certification policy; cab pick-up and drop-off facility.
About BOLD
We Transform Work Lives
As an established global organization, BOLD helps people find jobs. Our story is one of growth, success, and professional fulfillment. We create digital products that have empowered millions of people in 180 countries to build stronger resumes, cover letters, and CVs. The result of our work helps people interview confidently, finding the right job in less time. Our employees are experts, learners, contributors, and creatives.
We Celebrate And Promote Diversity And Inclusion
We value our position as an Equal Opportunity Employer. We hire based on qualifications, merit, and our business needs.
We don't discriminate regarding race, color, religion, gender, pregnancy, national origin or citizenship, ancestry, age, physical or mental disability, veteran status, sexual orientation, gender identity or expression, marital status, genetic information, or any other applicable characteristic protected by law.
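As a hedged illustration of the Python-plus-Airflow integration work described above (not BOLD's actual DAG), the sketch below wires a third-party extract into a warehouse load; the DAG id, schedule, and the two helper functions are hypothetical, and the extract/load bodies are left as placeholders.

```python
# Hedged sketch: a daily Airflow DAG that stages a third-party source (e.g. Google
# Analytics) and loads it into the warehouse. Requires Airflow 2.4+ for `schedule=`.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_ga_sessions(**context):
    # Placeholder: pull sessions from the Google Analytics API and stage them.
    ...

def load_to_warehouse(**context):
    # Placeholder: load the staged rows into the warehouse (e.g. Snowflake).
    ...

with DAG(
    dag_id="ga_sessions_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_ga_sessions", python_callable=extract_ga_sessions)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load   # the load runs only after the extract succeeds
```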
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.
About Team
The RBS team is an integral part of Amazon online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection and good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience.
Overview Of The Role
The ideal candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimizations and systems thinking and will be required to engage directly with multiple internal teams to drive business projects/automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven work environment.
Responsibilities Include
Works across teams and the Ops organization at country, regional, and/or cross-regional level to drive improvements and implement solutions for customers, including cost savings in process workflows, systems configuration, and performance metrics.
Basic Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field. Proficiency in automation using Python. Excellent oral and written communication skills. Experience with SQL, ETL processes, or data transformation.
Preferred Qualifications
Experience with scripting and automation tools. Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK. Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB. Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions. Understanding of cloud services, serverless architecture, and systems integration.
Key job responsibilities
As a Business Intelligence Engineer in the team, you will collaborate closely with business partners to architect, design, and implement BI projects and automations.
Responsibilities
Design, development, and ongoing operations of scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards. Development of moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda, etc.). Development of dashboards and reports. Collaborating with stakeholders to understand business domains, requirements, and expectations.
Additionally, working with owners of data source systems to understand capabilities and limitations. Deliver minimally to moderately complex data analysis; collaborating as needed with Data Science as complexity increases. Actively manage the timeline and deliverables of projects, anticipate risks and resolve issues. Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Internal Job Description Retail Business Service, ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insist on the Highest Standards, and has strong engineering and operational best practices experience. Basic Qualifications 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science or related field. Experience with Data modeling, SQL, ETL, Data Warehousing and Data Lakes. Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.) Expert-level SQL. Proficiency with one or more general purpose programming languages (e.g. Python, Java, Scala, etc.) Knowledge of AWS products such as Redshift, Quicksight, and Lambda. Excellent verbal/written communication & data presentation skills, including ability to succinctly summarize key findings and effectively communicate with both business and technical teams. Preferred Qualifications Experience with data-specific programming languages/packages such as R or Python Pandas. Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR. Knowledge of machine learning techniques and concepts. Basic Qualifications 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in Statistical Analysis packages such as R, SAS and Matlab Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A2994013
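As a hedged example of the "moderately complex data processing job" responsibility above (not an Amazon-internal job), the snippet below materializes a small daily summary in Redshift with plain SQL driven from Python; the cluster endpoint, schema, and table names are invented.

```python
# Hedged sketch: refresh a daily summary table in Redshift from Python via psycopg2.
import psycopg2

SUMMARY_SQL = """
    INSERT INTO analytics.daily_selection_summary (snapshot_date, marketplace, asin_count)
    SELECT CURRENT_DATE, marketplace, COUNT(DISTINCT asin)
    FROM retail.catalog_items
    WHERE is_active = TRUE
    GROUP BY marketplace;
"""

conn = psycopg2.connect(
    host="example-cluster.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="etl_user",
    password="***",
)
try:
    # The connection context manager commits on success and rolls back on error.
    with conn, conn.cursor() as cur:
        cur.execute(SUMMARY_SQL)
finally:
    conn.close()
```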
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderābād
Remote
About Us:
Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here.
Overview:
At Seismic, we're proud of our engineering culture where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows.
Who You Are:
Have working knowledge of one OO language, preferably C#, but we won't hold your Java expertise against you (you're the type of person who's interested in learning and becoming an expert at new things). Additionally, we've been using Python more and more, and bonus points if you're familiar with Scala. Have experience with architecturally complex distributed systems. Highly focused on operational excellence and quality – you have a passion to write clean and well-tested code and believe in the testing pyramid. Outstanding verbal and written communication skills with the ability to work with others at all levels, effective at working with geographically remote and culturally diverse teams. You enjoy solving challenging problems, all while having a blast with equally passionate team members. Conversant in AI engineering. You've been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc.
Who you are:
BS or MS in Computer Science, a similar technical field of study, or equivalent practical experience. 3+ years of software development experience within a SaaS business. Must have familiarity with .NET Core, C#, and related frameworks. Experience in data engineering - building and managing data pipelines and ETL processes, and familiarity with various technologies that drive them: Kafka, FiveTran (optional), Spark/Scala (optional), etc. Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.). Familiarity with RESTful microservice-based APIs. Experience in modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) a big plus. Experience with the Scrum and Agile development processes.
Familiarity developing in cloud-based environments. Optional: experience with 3rd party integrations. Optional: familiarity with meeting systems like Zoom, WebEx, MS Teams. Optional: familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot.
What you'll be doing:
Collaborating with experienced software engineers, data scientists and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience. Building large scale platform infrastructure and REST APIs serving machine learning-driven content recommendations to Seismic products. Leveraging the power of context in third-party applications such as CRMs to drive machine learning algorithms and models. Helping build next-gen agentic tooling for reporting and insights. Processing large amounts of internal and external system data for analytics, caching, modeling and more. Identifying performance bottlenecks and implementing solutions for them. Participating in code reviews, system design reviews, agile ceremonies, bug triage and on-call rotations.
If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
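Purely as a hedged sketch of the Kafka-fed ingestion pattern this team describes (the production services are largely C#, so this Python consumer is illustrative only), the loop below reads engagement events from a topic; the broker address, topic, and group id are placeholders.

```python
# Hedged sketch: a minimal Kafka consumer for engagement events using confluent-kafka.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "content-engagement-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["content-engagement-events"])

try:
    while True:
        msg = consumer.poll(1.0)          # wait up to 1 second for the next message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Placeholder: validate, enrich, and hand the event to the reporting models.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```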
Posted 2 weeks ago
10.0 years
3 - 8 Lacs
Hyderābād
On-site
Key Responsibilities:
Design, develop, test, deploy, and troubleshoot SQL scripts and Stored Procedures that implement complex ETL processes. Must have experience in Query Optimization. Must be able to understand complex business logic and fine-tune long-running queries. Must be expert at reverse engineering and extracting business requirements and rules from the code. Scheduling and monitoring the ETL jobs using SSIS. Build and review functional and business specifications. Expert understanding of PostgreSQL, Stored Procedures, Views, and Functions. Provide the estimates for the ETL work. Database/Data Warehouse performance tuning, monitoring, and troubleshooting expertise. Expertise in query construction, including writing and optimizing complex SQL queries that contain subqueries, joins, and derived tables. Troubleshooting and fixing the code. Unit testing the code and documentation of results. Must be expert at providing non-functional requirements. Help create test data and provide support to the QA process. Work with the gatekeeper on promoting the data stage jobs from QA to Production. Build, patch, and data fix in the production environment. Ensure very high availability, to the scale of 99.999%. Establish coding practices, best practices, and SOPs. Participate in code reviews and enforce the relevant process. Strong analytical and thinking capabilities, with good communication skills to conduct requirement gathering sessions and interview customers. Ability to perform independent research to understand the product requirements and customer needs. Communicates effectively with the project teams and other stakeholders. Translate technical details to a non-technical audience. Expert at creating architectural artifacts for the Data Warehouse. Team and effort management. Ability to set expectations for the client and the team. Ensure all deliverables are delivered on time at the highest quality.
Qualifications and Experience:
Years of Experience: 10+ years
Technical Skills:
ETL: SSIS, AWS Glue
SQL: Stored Procedures, Functions, Triggers, etc.; Query Optimization; server monitoring
DBMS: AWS Aurora MySQL, AWS Redshift, PostgreSQL
Cloud Services: AWS cloud services, including EC2, RDS, S3, and IAM
Data Skills: SQL performance tuning
Coding: knowledge of a programming language like C#, Python, or Java, to oversee dev resources
ABOUT QUALIZEAL
QualiZeal is North America's fastest-growing independent digital quality engineering services company. With a diverse portfolio of digital transformation services encompassing Quality Engineering, Digital Engineering, Advisory and Transformation, and Emerging Technology Testing, QualiZeal empowers organizations of all sizes globally to achieve quality excellence and sustainable market leadership. With a headcount of 850+ elite software quality engineers and trusted by 70+ global enterprises over 200+ projects across 12+ industries, QualiZeal is widely recognized for its pioneering innovation and industry-leading client and employee Net Promoter Scores (NPS). QualiZeal has been acknowledged as Gold Stevie® Award winner for 2025 in the Fastest Growing Tech Company of the Year category. Additionally, QualiZeal has also been named the winner of the Bronze Stevie® Award for 2024 Tech Startup of the Year (Services) category in the 22nd Annual American Business Awards®.
We also have been recognized as a Major Contender in Everest Group’s Specialist and Next-Generation QE Services PEAK® Matrix 2023 reports and as a notable provider in Forrester’s Continuous Automation and Testing (CAT) Services Landscape Report (Q4 2023).
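To illustrate the query-tuning responsibility in the posting above (a hedged sketch, not QualiZeal's code), the snippet compares the PostgreSQL plan of a correlated-subquery version of a report against a join rewrite; the connection string, tables, and columns are hypothetical.

```python
# Hedged sketch: compare EXPLAIN ANALYZE output for a slow query and its join rewrite.
import psycopg2

SLOW_QUERY = """
    SELECT o.order_id,
           (SELECT c.segment FROM customers c WHERE c.customer_id = o.customer_id) AS segment
    FROM orders o
    WHERE o.order_date >= CURRENT_DATE - INTERVAL '30 days'
"""

REWRITTEN_QUERY = """
    SELECT o.order_id, c.segment
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date >= CURRENT_DATE - INTERVAL '30 days'
"""

with psycopg2.connect("dbname=reporting user=etl_user host=example-host") as conn:
    with conn.cursor() as cur:
        for label, query in (("correlated subquery", SLOW_QUERY), ("join rewrite", REWRITTEN_QUERY)):
            cur.execute("EXPLAIN ANALYZE " + query)
            print(f"--- {label} ---")
            for (plan_line,) in cur.fetchall():   # each row is one line of the plan
                print(plan_line)
```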
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
Profit Intelligence team in Amazon Retail is seeking a seasoned and talented Senior Data Engineer to join the Historical Contribution Profit (HCP) team. HCP is a fast growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation data and Business Intelligence solutions using big data technologies and native AWS (NAWS). As a Data Engineer in Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one of the programming languages such as Java, C++, Spark/Scala, Python, etc.
Major Responsibilities
Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business. Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms. Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation. Keep up to date with big data technologies, and evaluate and make decisions around the use of new or existing software products to design the data architecture. Design, build and own all the components of a high-volume data warehouse end to end. Provide end-to-end data engineering support for project lifecycle execution (design, execution and risk assessment). Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources. Own the functional and nonfunctional scaling of software systems in your ownership area. Implement big data solutions for distributed computing.
Key job responsibilities
As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices to deliver high-quality products.
About The Team
Profit Intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving the growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers, who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks.
We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining. Basic Qualifications 3+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Tamil Nadu Job ID: A3031357
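As a hedged, generic example of one native-AWS load step of the kind this posting describes (not HCP's actual pipeline), the snippet below submits a Redshift COPY through the Redshift Data API; the cluster, database, IAM role, bucket, and table names are all placeholders.

```python
# Hedged sketch: bulk-load staged S3 data into Redshift via the Redshift Data API.
import boto3

COPY_SQL = """
    COPY profit.shipment_costs
    FROM 's3://example-bucket/hcp/shipment_costs/dt=2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

client = boto3.client("redshift-data", region_name="us-east-1")
response = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="profit",
    DbUser="etl_user",
    Sql=COPY_SQL,
)
# The call is asynchronous; poll describe_statement() with this id for completion.
print("Submitted statement:", response["Id"])
```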
Posted 2 weeks ago
5.0 years
4 - 6 Lacs
Hyderābād
On-site
DESCRIPTION Value Added Services Tech (VASTech) improves customer experience and reduces cost-to-serve by transforming selection discovery and addressing un-optimizations during fulfillment. We support three independent LOBs of Home Services, Regionalization and Retail Supply Chain Procurement workflows. We seek talented Engineers with expertise in large-scale systems to join our mission. Ideal candidates are passionate problem-solvers who can drive innovative solutions from design to deployment across teams. You will play a key role in developing global fulfillment models, requiring creativity and excellence. We need adaptable professionals who thrive in dynamic environments, focusing on enhancing customer experience. Join us to shape the future of e-commerce logistics and make a significant impact in this transformative field. BASIC QUALIFICATIONS 5+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Experience managing a data or BI team Experience leading and influencing the data or BI strategy of your team or organization Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS Experience hiring, developing and promoting engineering talent PREFERRED QUALIFICATIONS Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience with AWS Tools and Technologies (Redshift, S3, EC2) Bachelor's degree Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, TS, Hyderabad Software Development
Posted 2 weeks ago
5.0 years
6 - 9 Lacs
Hyderābād
On-site
Data Analytics Engineer – CL4
Role Overview:
As a Data Analytics Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftsmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.
Key Responsibilities:
Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations.
Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support.
Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications, and write high-quality, supportable, scalable code ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams.
Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time.
Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.
Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation.
Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps, to deliver daily product deployments using full automation from code check-in to production with all quality checks through the SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of full-lifecycle product development, focusing on continuous improvement and learning.
Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives.
Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions.
The team:
US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value and outcomes and leverages a progressive and responsive talent structure. As Deloitte's primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte's success. It is the engine that drives Deloitte, serving many of the world's largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.
Key Qualifications:
§ A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor.
§ Strong data engineering foundation with a deep understanding of data structures, algorithms, code instrumentation, etc.
§ 5+ years of proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools) and data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery).
§ 5+ years of experience with cloud-native engineering, using FaaS/PaaS/microservices on cloud hyper-scalers like Azure, AWS, and GCP.
§ Strong understanding of methodologies & tools like XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc.
§ Strong preference will be given to candidates with experience in AI/ML and GenAI.
§ Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.
How You Will Grow:
At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges.
This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 306141
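As a hedged illustration of the cloud-native, FaaS-style ELT experience listed in the qualifications above (a generic sketch, not a Deloitte implementation), the Lambda handler below reacts to a new file landing in object storage and queues it for a downstream warehouse load; the queue URL and event wiring are assumptions.

```python
# Hedged sketch: an AWS Lambda handler that queues newly landed files for an ELT load.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/warehouse-load-queue"

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        # Each S3 event record describes one object created in the landing bucket.
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        # Hand the file reference to the loader that performs the warehouse load.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))
    return {"queued": len(records)}
```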
Posted 2 weeks ago
7.0 years
4 - 6 Lacs
Gurgaon
On-site
Who We Are as a Company:
GoKwik was founded in 2020 with one simple mission: to democratize the shopping experience and increase the GMV realization for e-commerce businesses. The company is backed by Sequoia Capital India, Matrix Partners India, RTP Global, and marquee angels. GoKwik is an e-commerce enablement company focussed predominantly on solving crucial e-commerce issues like boosting conversion rates across the e-commerce funnel and reducing RTO for our partners. It leverages AI/ML technologies to solve hard-hitting problems like RTO to increase CoD conversion rates. GoKwik's 1-click Kwik checkout improves checkout conversion rates, ensuring higher GMV realization and reduced marketing CAC. What we do is unique, with zero immediate competition in India today, and we are therefore building a team of real rockstars in their field to fuel this profit-making start-up, which is growing at a rapid pace.
Summary:
The Data Engineering Manager will be responsible for leading a team of data engineers and closely working with product managers, data scientists, business intelligence specialists, and SDEs to design and launch data-driven strategies across the organization. This person will also be responsible for designing the overall data architecture that drives insights.
You'll spend time on the following:
Lead a team of data engineers and guide them with the best data strategies in line with data needs. Identifying, designing, and implementing process improvements that include building/re-engineering data models, data architectures, pipelines, and data applications. Continuously look for data optimization processes and oversee data management, governance, security, and analysis. Hire the best talent, identify the potential of each team member, and offer technical guidance, leadership, and advice on planning, designing, and implementing data solutions. Manage data delivery through high-performing dashboards, visualizations, and reports. Ensure data quality and security across every product vertical and related areas. Design, create, and launch new data models and pipelines as per needs. Act as a project manager for data projects, and mentor and grow the team by hiring skilled data engineers. Work towards achieving high performance, operational excellence, accuracy, and reliability of the overall system. Mentor and grow the data warehouse, data modeling, and data visualization team to correctly establish a data-driven culture. Utilize tools and technologies to create data architecture that supports new data initiatives and is useful in next-gen products. Ensure test-driven products/pipelines that are easily maintainable and reusable. Design and build an infrastructure for extraction, transformation, and loading of data from a wide range of data sources. Overall, build and maintain data foundations that include tools, infrastructure, and pipelines that help the marketing and sales teams. Increase automation and build analytic solutions at scale to serve the business requirements.
We're Excited About You If You Have:
Bachelor's/Master's degree in Computer Science, Mathematics, or any alternative computer programming training. At least 7 years of experience working in the Data Engineering field. Experience managing engineers and guiding a team of engineers through project planning, execution, and quality control stages. Strong project management and organizational skills, and the ability to work independently in a fast-paced, quickly changing environment.
Ability to keep up with several projects at once and understand the impact of projects within a larger system. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Experience building data pipelines, architectures, and data sets from raw, loosely structured data. Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience supporting and working with cross-functional teams in a dynamic environment. Experience with relational SQL databases, including Postgres and MySQL. Experience with object-oriented design in Python. Experience with data pipeline and workflow management tools. Experience with AWS cloud services: EC2, RDS, Redshift, Glue, S3. Both verbal and written communication skills. Efficiency in handling technical issues expertly, producing new design ideas and substitutions where required. Appreciate and add to a collaborative team effort.
Some Important Traits We Look For in a Person in This Role:
Independent, resourceful, analytical, and able to solve problems effectively. Ability to be flexible, agile, and thrive in chaos. Excellent oral and written communication skills.
Our Core Value Cultures: Merchant 1st, Innovation, Talent.
The pace of our growth is incredible – if you want to tackle hard and interesting problems at scale, and create an impact within an entrepreneurial environment, come join us!
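Since the posting above asks for object-oriented design in Python applied to data pipelines, here is a hedged, minimal sketch of that pattern; the class names and the toy extract/load bodies are illustrative only.

```python
# Hedged sketch: a small object-oriented extract-transform-load interface.
from abc import ABC, abstractmethod
from typing import Iterable

class PipelineStep(ABC):
    @abstractmethod
    def extract(self) -> Iterable[dict]: ...

    @abstractmethod
    def transform(self, rows: Iterable[dict]) -> Iterable[dict]: ...

    @abstractmethod
    def load(self, rows: Iterable[dict]) -> None: ...

    def run(self) -> None:
        # Template method: every concrete step runs the same three phases.
        self.load(self.transform(self.extract()))

class CheckoutEventsToWarehouse(PipelineStep):
    """Toy concrete step: checkout events from a raw source into a warehouse."""

    def extract(self) -> Iterable[dict]:
        return [{"order_id": 1, "status": "placed"}]            # placeholder source read

    def transform(self, rows: Iterable[dict]) -> Iterable[dict]:
        return ({**r, "status": r["status"].upper()} for r in rows)

    def load(self, rows: Iterable[dict]) -> None:
        for row in rows:
            print("would insert:", row)                          # placeholder warehouse write

CheckoutEventsToWarehouse().run()
```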
Posted 2 weeks ago
6.0 years
0 Lacs
Kochi, Kerala, India
On-site
We are seeking an experienced Data Engineer to design and implement scalable data solutions. The ideal candidate will have deep expertise in cloud data warehousing, ETL/ELT processes, data modeling, and business intelligence.
Requirements
6+ years of experience in data engineering; Deep expertise with AWS Redshift, including data modeling, query optimization, and cluster management; Good understanding of and experience in implementing data quality, lineage, and data governance; Strong experience in Apache Airflow for workflow orchestration and scheduling; Proficiency with dbt for data transformation and modeling; Good experience in the Azure data stack can also be considered; Experience creating dashboards and reports in Tableau; Excellent SQL skills and experience with Python, Java, or Scala; Knowledge of data warehousing concepts and dimensional modeling; Strong communication skills and ability to work cross-functionally; Bachelor's or Master's degree in Computer Science, Engineering, or related field.
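As a hedged sketch of the Redshift data-modeling and query-optimization expertise this role asks for (schema, table, and column names are hypothetical), the snippet below creates a fact table with explicit distribution and sort keys so common join and date-filter patterns stay fast.

```python
# Hedged sketch: create a Redshift fact table with DISTKEY/SORTKEY from Python.
import psycopg2

DDL = """
    CREATE TABLE IF NOT EXISTS analytics.fct_orders (
        order_id      BIGINT         NOT NULL,
        customer_id   BIGINT         NOT NULL,
        order_date    DATE           NOT NULL,
        order_total   DECIMAL(12, 2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)   -- co-locate rows with the customer dimension for joins
    SORTKEY (order_date);   -- keep range-restricted scans on recent dates cheap
"""

with psycopg2.connect(
    host="example-cluster.redshift.amazonaws.com", port=5439,
    dbname="analytics", user="etl_user", password="***",
) as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```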
Posted 2 weeks ago