
3597 Redshift Jobs - Page 47

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Life at UiPath
The people at UiPath believe in the transformative power of automation to change how the world works. We’re committed to creating category-leading enterprise software that unleashes that power. To make that happen, we need people who are curious, self-propelled, generous, and genuine. People who love being part of a fast-moving, fast-thinking growth company. And people who care—about each other, about UiPath, and about our larger purpose. Could that be you?

Your mission
We’re looking for a Support Engineer to join the team in Jaipur. It’s a customer-facing role that’s all about problem-solving – great for someone who enjoys helping others and has a real interest in tech. It’s especially suited to anyone thinking about a future in Data Engineering, Data Science, or Platform Ops. You’ll need some basic knowledge of SQL and Python, but this isn’t a software development job. Day to day, you’ll work closely with customers, getting to grips with their issues and helping them troubleshoot problems on the Peak platform and with their deployed applications. You’ll need to be curious, proactive, and comfortable owning a problem from start to finish. If you’re someone who enjoys figuring things out, explaining things clearly, and digging into the root cause of an issue, this could be a great fit.

What You'll Do At UiPath
- Resolve Technical Issues: Troubleshoot and resolve customer issues on the Peak platform, using your analytical and problem-solving skills.
- Own the Process: Take full ownership of problems – investigate, follow up, escalate when needed, and ensure resolution is delivered.
- Investigate Errors: Dig into application logs, API responses, and system outputs to identify and resolve errors within Peak-built customer applications.
- Write Useful Scripts: Use scripting (e.g. in Python or Bash) to automate routine support tasks, extract data, or investigate technical issues efficiently (see the sketch after this listing).
- Monitor Systems: Help monitor infrastructure and application health, proactively spotting and flagging unusual behaviour before it becomes a problem.
- Support Infrastructure Security: Assist with routine security updates and checks, ensuring our systems remain secure and up to date.
- Communicate Clearly: Provide timely, professional updates to both internal teams and customers. You’ll often be the bridge between technical and non-technical people.
- Contribute to Documentation: Help us build out internal documentation and guides to make solving future issues faster and easier.
- Be Part of the Team: Participate in a shared on-call rotation to support our customers when they need us most.

What You'll Bring To The Team
- Educational Requirements: A degree in computer science or a related field, or equivalent academic experience in technology.
- Technical Skills: Comfortable using Python, Bash, and SQL for scripting, querying data, and troubleshooting. Familiar with Linux and the command line, and confident navigating file systems or running basic system commands. Exposure to cloud platforms (e.g. AWS, GCP, Azure) is a bonus – especially if you’ve explored tools like Snowflake, Redshift, or other modern data warehouses. Any experience with managing or supporting data workflows, backups, restores, or investigating issues across datasets is a strong plus.
- Communication Skills: Strong verbal and written communication skills in English. Ability to explain technical concepts clearly and concisely to both technical and non-technical audiences.
- Personal Attributes: Well-organised with the ability to handle multiple tasks simultaneously. Strong problem-solving and analytical skills. Fast learner with the ability to adapt to new tools and technologies quickly. Excellent interpersonal skills and the ability to work effectively in a team environment.

Maybe you don’t tick all the boxes above—but still think you’d be great for the job? Go ahead, apply anyway. Please. Because we know that experience comes in all shapes and sizes—and passion can’t be learned. Many of our roles allow for flexibility in when and where work gets done. Depending on the needs of the business and the role, the number of hybrid, office-based, and remote workers will vary from team to team. Applications are assessed on a rolling basis and there is no fixed deadline for this requisition. The application window may change depending on the volume of applications received or may close immediately if a qualified candidate is selected. We value a range of diverse backgrounds, experiences, and ideas. We pride ourselves on our diversity and inclusive workplace that provides equal opportunities to all persons regardless of age, race, color, religion, sex, sexual orientation, gender identity and expression, national origin, disability, neurodiversity, military and/or veteran status, or any other protected classes. Additionally, UiPath provides reasonable accommodations for candidates on request and respects applicants' privacy rights. To review these and other legal disclosures, visit our privacy policy.
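For illustration only (not part of the posting): a minimal Python sketch of the kind of routine log triage this role describes. The log file name and the "timestamp LEVEL message" layout are hypothetical placeholders, not Peak platform specifics.

```python
# Minimal log-triage sketch: count ERROR lines in an application log and
# print the most frequent messages. File name and line layout are assumed.
from collections import Counter
from pathlib import Path

LOG_FILE = Path("app.log")  # hypothetical log file


def summarize_errors(path: Path, top_n: int = 5) -> list[tuple[str, int]]:
    counts: Counter = Counter()
    for line in path.read_text(encoding="utf-8", errors="replace").splitlines():
        # Assume each line looks like: "2024-01-01 12:00:00 ERROR <message>"
        if " ERROR " in line:
            message = line.split(" ERROR ", 1)[1].strip()
            counts[message] += 1
    return counts.most_common(top_n)


if __name__ == "__main__":
    for message, count in summarize_errors(LOG_FILE):
        print(f"{count:>5}  {message}")
```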

Posted 4 weeks ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Senior Software Engineer – AWS Cloud Engineer
Convera is looking for motivated and experienced Voice Engineers and professionals who are eager to expand their expertise into the dynamic world of Amazon Connect—a cutting-edge, cloud-based contact center solution that offers complete customization with scalable cloud technology. If you're looking to advance your career in software development, AWS, or AI, this is the perfect opportunity to upskill and work on innovative solutions.

You Will Be Responsible For
In your role as a Senior AWS Cloud Engineer, you will:
- Architect and Develop Cloud Solutions: Lead the end-to-end design and development of robust data pipelines and data architectures using AWS tools and platforms, including AWS Glue, S3, RDS, Lambda, EMR, and Redshift.
- Analyze, implement, support, and provide recommendations for AWS cloud solutions.
- Design, deploy, and manage AWS network infrastructure using VPC, Transit Gateway, Direct Connect, Route 53, and AWS Security Groups, while also supporting on-premises networking technologies.
- Architect and deploy AWS infrastructure for hosting new and existing line-of-business applications using EC2, Lambda, RDS, S3, EFS, and AWS Auto Scaling.
- Ensure compliance with the AWS Well-Architected Framework and security best practices using IAM, AWS Organizations, GuardDuty, and Security Hub.
- Container Orchestration: Deploy and manage containerized applications using AWS ECS and EKS.
- Event-Driven Serverless Architecture: Design and implement event-driven serverless architectures using AWS Lambda, API Gateway, SQS, SNS, and EventBridge (see the sketch after this listing).
- Implement and test system recovery strategies in accordance with the company’s AWS Backup, Disaster Recovery (DR), and Business Continuity (BC) plans.
- Collaborate with AWS Technical Account Managers (TAMs) and customers to provide cloud strategy, cost optimization, and technology roadmaps that align with business objectives.
- Design AWS cloud architectures following Well-Architected guidelines, leveraging CloudFormation, Terraform, and AWS Control Tower.
- Actively participate in team meetings, project discussions, and cross-functional collaboration to enhance AWS cloud adoption and optimization.
- Maintain customer runbooks, automating and improving them with AWS-native solutions such as AWS Systems Manager, CloudWatch, and Lambda.
- Provide off-hours support on a rotational basis, including on-call responsibilities and scheduled maintenance windows.
- Contribute to internal R&D projects, validating and testing new processes and/or tools and services for integration into Innovative Solutions’ offerings.
- Lead or contribute to internal process improvement initiatives, leveraging various DevOps tools to enhance automation and efficiency.
AWS services within the scope of this role are not limited to the ones specifically called out in this list of responsibilities.

A Successful Candidate For This Position Should Have
- A bachelor's degree in business or computer science and 12+ years of experience in software engineering or IT, including at least four years in a role in which the primary responsibility is git-based application code development and/or DevOps engineering and/or the development, maintenance, and support of CI/CD pipelines, or an appropriate combination of industry-related professional experience and education.
- Proven experience with AWS services such as EC2, S3, Lambda, CloudFormation, and VPC, among others.
- Skilled in scripting languages such as Python, Bash, or PowerShell.
- Experience with Infrastructure as Code (IaC) tools such as Terraform and AWS CloudFormation, and with monitoring and logging tools such as AWS CloudWatch and the ELK stack.
- Strong understanding of cloud security best practices.
- Great communication and collaboration skills.
- Ability to work independently and with a team.

Preferred Qualifications
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified DevOps Engineer – Professional
- HashiCorp Certified: Terraform Associate
- Experience with CI/CD pipelines and DevOps practices
- Knowledge of scalable data architecture to ensure efficient and scalable data processing and storage solutions

About Convera
Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers – helping them capture more value with every transaction. Convera serves more than 30,000 customers ranging from small business owners to enterprise treasurers to educational institutions to financial institutions to law firms to NGOs. Our teams care deeply about the value we bring to our customers, which makes Convera a rewarding place to work. This is an exciting time for our organization as we build our team with growth-minded, results-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging.

We offer an abundance of competitive perks and benefits, including:
- Competitive salary
- Opportunity to earn an annual
- Great career growth and development opportunities in a global organization
- A flexible approach to work
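Purely as an illustration of the event-driven serverless pattern referenced above, here is a minimal Python sketch of an SQS-triggered AWS Lambda handler that writes messages to DynamoDB. The message payload shape and the table name are assumptions for the example, not Convera specifics.

```python
# Sketch of an SQS-triggered Lambda handler: each SQS record carries a JSON
# body (assumed) that is persisted to a hypothetical DynamoDB table.
import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")  # hypothetical table name


def handler(event, context):
    """Process each SQS record and persist it to DynamoDB."""
    table = dynamodb.Table(TABLE_NAME)
    processed = 0
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # SQS message body as JSON (assumed)
        table.put_item(Item=payload)          # upsert into the hypothetical table
        processed += 1
    return {"processed": processed}
```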

Posted 4 weeks ago

Apply

3.0 years

4 - 8 Lacs

Hyderābād

On-site

DevSecOps Engineer – CL3 Role Overview : As a DevSecOps Engineer , you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your DevSecOps engineering craftsmanship across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities : Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient and secure pipelines with low operating costs, meeting platform/technology KPIs. Technical Leadership and Advocacy: Serve as the technical advocate for DevSecOps modern practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices—being responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.) and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities. Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments as well as implement deployment techniques like Blue-Green, Canary to minimize down-time and enable A/B testing. Be always hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., triaging and troubleshooting production issues). Be self-driven to learn new technologies, experiment with engineers, and learn how to apply those new technologies on projects. Customer-Centric Engineering: Develop lean, and yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess basic knowledge in modern software engineering practices and principles, including Agile methodologies, DevSecOps, Continuous Integration/Continuous Deployment. 
Learn to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility. Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Learn to create a coherent narrative that aligns technical solutions with business objectives. Engagement and Collaborative Co-Creation: Able to engage and collaborate with product engineering teams, including customers as needed. Able to build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions.

The team:
US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes by leveraging a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.

Key Qualifications:
- A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor.
- Good software engineering foundation with an understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
- 3+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
- 3+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
- 3+ years of hands-on experience in security tools automation, SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP.
- 3+ years of hands-on experience using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
- Some experience with cloud-native services like Data Lakes, CDN, API Gateways, Managed PaaS, Security, etc. on multiple cloud providers like AWS, Azure, and GCP is preferred.
- Good understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
- General understanding of cloud providers' security practices and of database technologies and maintenance (e.g. RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
- General knowledge of networking, firewalls, and load balancers.
- Strong preference will be given to candidates with AI/ML and GenAI experience.
- Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

How You Will Grow:
At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306119
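The responsibilities above mention Blue-Green and Canary deployment techniques. As a hedged, illustrative sketch of one way to approach a canary shift, the Python snippet below uses AWS Lambda alias routing via boto3; the function name, alias, and version numbers are hypothetical and error handling is omitted.

```python
# Canary traffic shift sketch: route a small weight of invocations to a new
# Lambda version through the alias's additional-version weights.
import boto3

lambda_client = boto3.client("lambda")


def shift_canary_traffic(function_name: str, alias: str,
                         stable_version: str, canary_version: str,
                         canary_weight: float) -> None:
    """Route `canary_weight` (0.0-1.0) of invocations to the canary version."""
    lambda_client.update_alias(
        FunctionName=function_name,
        Name=alias,
        FunctionVersion=stable_version,  # the rest of the traffic stays on the stable version
        RoutingConfig={"AdditionalVersionWeights": {canary_version: canary_weight}},
    )


if __name__ == "__main__":
    # Hypothetical values: start by sending 10% of traffic to version 6.
    shift_canary_traffic("payments-api", "live", stable_version="5",
                         canary_version="6", canary_weight=0.10)
```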

Posted 4 weeks ago

Apply

8.0 - 12.0 years

5 - 8 Lacs

Hyderābād

On-site

A Computer Science degree (Bachelor's or Master's), or Mathematics or Statistics (Master's). Total of 8-12 years of Big Data, Business Intelligence, Data Warehousing, Artificial Intelligence, and Machine Learning experience. At least 2 years of experience in data science rolling out production-grade solutions to end customers. Technical Skills: Experience in designing and deploying analytical systems end to end. Excellent problem-solving skills. Strong verbal and written communication skills. Strong hands-on programming experience with R, Python, or Java. Hands-on experience with any of the data science platforms such as RapidMiner, SAS, and IBM. Sound understanding of machine learning algorithms, including classification, clustering, association, and recommendation generation. Familiarity with open-source machine learning (R, Python), NLP toolkits, and deep learning frameworks (TensorFlow, Keras, H2O, etc.). Working experience with statistical inference. Experience with BI (Tableau, Oracle, IBM, Microsoft), Data Science (RapidMiner/SAS/IBM), NoSQL (MongoDB, Cassandra, Hadoop, Spark), and RDBMS (Oracle, SQL Server) tools. Experience using cloud computing and storage frameworks such as Amazon AWS (EC2, S3, Redshift, RDS) and Microsoft Azure Storage.

Posted 4 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderābād

On-site

Skills:
- 3-5 years of Data Warehouse / Business Intelligence work experience, including work with Talend Open Studio.
- Extensive experience with the Talend Real-Time Big Data Platform in the areas of design, development, and testing, with a focus on Talend Data Integration and Talend Big Data Real Time, including big data streaming (Spark) jobs with different databases.
- Experience working with databases like Greenplum, HAWQ, Oracle, Teradata, MS SQL Server, Sybase, Cassandra, MongoDB, flat files, and APIs, and with Hadoop/big data concepts (ecosystem tools like Hive, Pig, Sqoop, and MapReduce).
- Working knowledge of Java is preferred.
- Advanced knowledge of ETL, including the ability to read and write efficient, robust code, follow or implement best practices and coding standards, design and implement common ETL strategies (CDC, SCD, etc.), and create reusable, maintainable jobs.
- Solid background in database systems (such as Oracle, SQL Server, Redshift, and Salesforce) along with strong knowledge of PL/SQL and SQL (see the validation sketch after this listing).
- Hands-on knowledge of Unix commands and shell scripting.
- Good knowledge of SQL, including the ability to write stored procedures, triggers, functions, etc.
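For illustration, here is a minimal Python sketch of the kind of data-migration validation query work described above, comparing a row count and a simple key checksum between a source and a target table. sqlite3 is used only as a stand-in for any DB-API connection (Oracle, Greenplum, SQL Server, etc.); the database files, table, and column names are hypothetical.

```python
# Data-migration reconciliation sketch: compare (row count, key sum) between
# a source table and its migrated copy as a cheap fingerprint check.
import sqlite3


def table_profile(conn, table: str, key_column: str) -> tuple:
    """Return (row_count, sum_of_key) for a table."""
    cur = conn.execute(f"SELECT COUNT(*), COALESCE(SUM({key_column}), 0) FROM {table}")
    return cur.fetchone()


if __name__ == "__main__":
    source = sqlite3.connect("source.db")   # hypothetical source extract
    target = sqlite3.connect("target.db")   # hypothetical migrated copy
    src_profile = table_profile(source, "customers", "customer_id")
    tgt_profile = table_profile(target, "customers", "customer_id")
    status = "MATCH" if src_profile == tgt_profile else "MISMATCH"
    print(f"source={src_profile} target={tgt_profile} -> {status}")
```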

Posted 4 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderābād

On-site

A Computer Science degree (Bachelor's or Master's), or Mathematics or Statistics (Master's). Total of 3-5 years of Big Data, Business Intelligence, Data Warehousing, Artificial Intelligence, and Machine Learning experience. At least 2 years of experience in data science rolling out production-grade solutions to end customers. Technical Skills: Experience in designing and deploying analytical systems end to end. Excellent problem-solving skills. Strong verbal and written communication skills. Strong hands-on programming experience with R, Python, or Java. Hands-on experience with any of the data science platforms such as RapidMiner, Spark, SAS, and IBM. Sound understanding of machine learning algorithms, including classification, clustering, association, and recommendation generation. Familiarity with open-source machine learning (R, Python), NLP toolkits, and deep learning frameworks (TensorFlow, Keras, H2O, etc.). Working experience with statistical inference. Experience with BI (Tableau, Oracle, IBM, Microsoft), Data Science (RapidMiner/SAS/IBM), NoSQL (MongoDB, Cassandra, Hadoop, Spark), and RDBMS (Oracle, SQL Server) tools. Experience using cloud computing and storage frameworks such as Amazon AWS (EC2, S3, Redshift, RDS) and Microsoft Azure Storage.

Posted 4 weeks ago

Apply

7.0 - 10.0 years

25 - 30 Lacs

Navi Mumbai

Work from Office

We are looking for a highly skilled Data Catalog Engineer to join our team at Serendipity Corporate Services, with 6-8 years of experience in the IT Services & Consulting industry. Roles and Responsibility Design and implement data cataloging solutions to meet business requirements. Develop and maintain large-scale data catalogs using various tools and technologies. Collaborate with cross-functional teams to identify and prioritize data needs. Ensure data quality and integrity by implementing data validation and testing procedures. Optimize data catalog performance by analyzing query logs and identifying improvement areas. Provide technical support and training to end-users on data catalog usage. Job Requirements Strong understanding of data modeling and database design principles. Experience with data management tools such as SQL and NoSQL databases. Proficiency in programming languages such as Python or Java. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.

Posted 4 weeks ago

Apply

3.0 years

3 - 10 Lacs

Chennai

On-site

- 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience
- 2+ years of complex Excel VBA macros writing experience
- Experience defining requirements and using data and metrics to draw business insights
- Experience with SQL or ETL

Amazon is looking for a data-savvy professional to create, report on, and monitor business and operations metrics. Amazon has a culture of data-driven decision-making, and demands business intelligence that is timely, accurate, and actionable. This role will help scope, influence, and evaluate process improvements, and will contribute to Amazon’s success by enabling data-driven decision making that will impact the customer experience.

Key job responsibilities
You love working with data, can create clear and effective reports and data visualizations, and can partner with customers to answer key business questions. You will also have the opportunity to display your skills in the following areas:
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
- Analyze current testing processes, identify improvement opportunities, define requirements, and work with technical teams and managers to integrate improvements into their development schedules.
- Demonstrate good judgment in solving problems as well as identifying problems in advance, and proposing solutions.
- Derive actionable insights and present recommendations from your analyses to partner teams and organizational leadership.
- Translate technical testing results into business-friendly reports.
- Have a strong desire to dive deep and demonstrate the ability to do it effectively.
- Share your expertise - partner with and empower product teams to perform their own analyses.
- Produce high-quality documentation for processes and analysis results.

A day in the life
We are looking for a Business Analyst to join our team. This person will be a creative problem solver who cares deeply about what our customers experience, and is a highly analytical, team-oriented individual with excellent communication skills. In this highly visible role, you will provide reporting, analyze data, make sense of the results, and be able to explain what it all means to key stakeholders, such as Front Line Managers (FLMs), QA Engineers, and Project Managers. You are a self-starter and a reliable teammate, you are comfortable with ambiguity in a fast-paced and ever-changing environment, you are able to see the big picture while paying meticulous attention to detail, you know what it takes to build trust, and you are curious and thrive on learning. You will become a subject matter expert in the Device OS world.

About the team
The Amazon Devices group delivers delightfully unique Amazon experiences, giving customers instant access to everything, digital or physical. The Device OS team plays a central role in creating these innovative devices at Lab126. The Device OS team is responsible for board bring-up, low-level software, core operating system architecture, innovative framework feature development, associated cloud services, and end-to-end system functions that bring these devices to life. The software built by the Device OS team runs on all Amazon consumer electronics devices.

- Experience creating complex SQL queries joining multiple datasets, ETL/DW concepts
- Experience in Amazon Redshift and other AWS technologies

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About DATAECONOMY:
We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH and Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Summary:
We are seeking a skilled ETL Data Engineer to design, build, and maintain efficient and reliable ETL pipelines, ensuring seamless data integration, transformation, and delivery to support business intelligence and analytics. The ideal candidate should have hands-on experience with ETL tools like Talend, strong database knowledge, and familiarity with AWS services.

Key Responsibilities
- Design, develop, and optimize ETL workflows and data pipelines using Talend or similar ETL tools.
- Collaborate with stakeholders to understand business requirements and translate them into technical specifications.
- Integrate data from various sources, including databases, APIs, and cloud platforms, into data warehouses or data lakes.
- Create and optimize complex SQL queries for data extraction, transformation, and loading.
- Manage and monitor ETL processes to ensure data integrity, accuracy, and efficiency.
- Work with AWS services like S3, Redshift, RDS, and Glue for data storage and processing (see the sketch after this listing).
- Implement data quality checks and ensure compliance with data governance standards.
- Troubleshoot and resolve data discrepancies and performance issues.
- Document ETL processes, workflows, and technical specifications for future reference.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in ETL development, data engineering, or data warehousing.
- Hands-on experience with Talend or similar ETL tools (Informatica, SSIS, etc.).
- Proficiency in SQL and a strong understanding of database concepts (relational and non-relational).
- Experience working in an AWS environment with services like S3, Redshift, RDS, or Glue.
- Strong problem-solving skills and the ability to troubleshoot data-related issues.
- Knowledge of scripting languages like Python or shell scripting is a plus.
- Good communication skills to collaborate with cross-functional teams.

Benefits
As per company standards.
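As an illustrative sketch of the S3-to-Redshift load step this role describes, the Python snippet below issues a COPY command through psycopg2 (Redshift is PostgreSQL-wire-compatible). Host, credentials, bucket, table, and IAM role are placeholders, not real resources.

```python
# S3-to-Redshift bulk load sketch: COPY staged CSV files into a target table.
import psycopg2

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-bucket/staging/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""


def load_orders() -> None:
    # Hypothetical cluster endpoint and credentials.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="etl_user", password="***",
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # server-side bulk load from S3
    finally:
        conn.close()


if __name__ == "__main__":
    load_orders()
```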

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases; process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS (see the sketch after this listing).
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Total 3-5+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills.
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.

Preferred Technical And Professional Experience
Certification in AWS and Databricks, or Cloudera Spark Certified Developer.
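For illustration, a minimal PySpark sketch of the ingest-process-write pattern described above. The input and output paths and column names are hypothetical; on AWS these would typically be S3 URIs rather than local paths.

```python
# Minimal Spark pipeline sketch: read raw CSV events, keep completed ones,
# and write the curated dataset as partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingest: read raw CSV events (assumed to have a header row).
events = spark.read.option("header", True).csv("input/events.csv")

# Transform: keep completed events and add a processing-date column.
completed = (
    events.filter(F.col("status") == "COMPLETED")
          .withColumn("processed_date", F.current_date())
)

# Load: write the result partitioned by processing date.
completed.write.mode("overwrite").partitionBy("processed_date").parquet("output/events_completed")

spark.stop()
```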

Posted 4 weeks ago

Apply

7.0 - 10.0 years

25 - 35 Lacs

Hyderabad

Work from Office

Job Summary As a Senior Data Engineer, you will play a key role in developing and maintaining the databases and scripts that power Creditsafes products and websites. You will be responsible for handling large datasets, designing scalable data pipelines, and ensuring seamless data processing across cloud environments. This role provides an excellent opportunity to contribute to an exciting, fast paced, and rapidly expanding organization. Key Responsibilities Develop and maintain scalable, metadata-driven, event-based distributed data processing platforms. Design and implement data solutions using Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3. Build and optimize APIs to securely handle over 1,000 transactions per second using serverless technologies. Participate in peer reviews and contribute to a clean, efficient, and high-performance codebase. Implement best practices such as continuous integration, test-driven development, and cloud optimization. Understand company and domain data to suggest improvements in existing products. Provide mentorship and technical leadership to the engineering team. Skills & Qualifications Proficiency in Python and experience in building scalable data pipelines. Experience working in cloud environments such as AWS (S3, Glue, Redshift, DynamoDB). Strong understanding of data architecture, database design, and event-driven data processing. Ability to write clean, efficient, and maintainable code. Excellent communication skills and ability to collaborate within a team. Experience in mentoring engineers and providing leadership on complex technology issues. Benefits Competitive salary and performance bonus scheme. Hybrid working model for better work-life balance. 20 days annual leave plus 10 bank holidays. Healthcare, company pension, gratuity, and parental insurance. Cab services for women for enhanced safety. Global company gatherings and career growth opportunities.
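To make the scheduled, metadata-driven pipeline style described above concrete, here is a hedged Python sketch of a two-task Airflow DAG (assuming Airflow 2.4+ for the `schedule` argument). Task bodies are stubs, and the DAG, bucket, and table names are hypothetical rather than Creditsafe specifics.

```python
# Daily extract-then-load DAG sketch with two chained PythonOperator tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3():
    # Placeholder: pull data from a source system and stage it in S3.
    print("extracting to s3://example-bucket/staging/ ...")


def load_to_redshift():
    # Placeholder: issue a COPY from the staged files into Redshift.
    print("loading staged files into Redshift ...")


with DAG(
    dag_id="daily_company_data",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ argument name (assumption)
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load
```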

Posted 4 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greeting from Infosys BPM Ltd, We are hiring for Walkme, ETL Testing + Python Programming, Automation Testing with Java, Selenium, BDD, Cucumber, Test Automation using Java and Selenium, with knowledge on testing process, SQL, ETL DB Testing, ETL Testing Automation skills. Please walk-in for interview on 9th and 10th July 2025 at Pune location Note: Please carry copy of this email to the venue and make sure you register your application before attending the walk-in. Please use below link to apply and register your application. Please mention Candidate ID on top of the Resume *** https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217814 Interview details Interview Date: 9th and 10th July 2025 Interview Time: 10 AM till 1 PM Interview Venue: Pune:: Hinjewadi Phase 1 Infosys BPM Limited, Plot No. 1, Building B1, Ground floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra-411057 Please find below Job Description for your reference: Work from Office*** Min 2 years of experience on project is mandate*** Job Description: Walkme Design, develop, and deploy WalkMe solutions to enhance user experience and drive digital adoption. Experience in task-based documentation, training and content strategy Experience working in a multi-disciplined team with geographically distributed co-workers Working knowledge technologies such as CSS and JavaScript Project management and/or Jira experience Experience in developing in-app guidance using tools such as WalkMe, Strong experience in technical writing, instructional video or guided learning experience in a software company Job Description: ETL Testing + Python Programming Experience in Data Migration Testing (ETL Testing), Manual & Automation with Python Programming. Strong on writing complex SQLs for data migration validations. Work experience with Agile Scrum Methodology Functional Testing- UI Test Automation using Selenium, Java Financial domain experience Good to have AWS knowledge Job Description: Automation Testing with Java, Selenium, BDD, Cucumber Hands on exp in Automation. Java, Selenium, BDD , Cucumber expertise is mandatory. Banking Domian Experience is good. Financial domain experience Automation Talent with TOSCA skills, Payment domain skills is preferable. Job Description: Test Automation using Java and Selenium, with knowledge on testing process, SQL Java, Selenium automation, SQL, Testing concepts, Agile. Tools: Jira and ALM, Intellij Functional Testing: UI Test Automation using Selenium, Java Financial domain experience Job Description: ETL DB Testing Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of Data Warehousing concepts, Database Systems and Quality Assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL Queries and using SQL tools is a must, exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing of large datasets. Experience in agile development is must Understanding of Oracle Database and UNIX/VMC systems is a must Job Description: ETL Testing Automation Strong experience in ETL testing and automation. 
Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, BigQuery is a plus. Experience with data quality tools is a plus. REGISTRATION PROCESS: The Candidate ID & SHL Test(AMCAT ID) is mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Talents without registration & assessment will not be allowed for the Interview). Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register" and provide the required details and submit. STEP 3: Once submitted, Your Candidate ID(100XXXXXXXX) will be generated. STEP 4: The candidate ID will be shared to the registered Email ID. SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 
5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry 2 set of updated Resume/CV (Hard Copy). Please carry original ID proof for security clearance. Please carry individual headphone/Bluetooth for the interview. Pointers to note: Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Original Government ID card is must for Security Clearance. Regards, Infosys BPM Recruitment team.

Posted 4 weeks ago

Apply

7.0 - 9.0 years

25 - 30 Lacs

Navi Mumbai

Work from Office

Key Responsibilities:
- Lead the end-to-end implementation of a data cataloging solution within AWS (preferably the AWS Glue Data Catalog, or third-party tools like Apache Atlas, Alation, Collibra, etc.).
- Establish and manage metadata frameworks for structured and unstructured data assets in the data lake and data warehouse environments.
- Integrate the data catalog with AWS-based storage solutions such as S3, Redshift, Athena, Glue, and EMR (see the sketch after this listing).
- Collaborate with data governance/BPRG/IT project teams to define metadata standards, data classifications, and stewardship processes.
- Develop automation scripts for catalog ingestion, lineage tracking, and metadata updates using Python, Lambda, PySpark, or Glue/EMR custom jobs.
- Work closely with data engineers, data architects, and analysts to ensure metadata is accurate, relevant, and up to date.
- Implement role-based access controls and ensure compliance with data privacy and regulatory standards.
- Create detailed documentation and deliver training/workshops for internal stakeholders on using the data catalog.
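As an illustrative sketch of the catalog-ingestion plumbing mentioned above, the boto3 snippet below lists the tables registered in an AWS Glue Data Catalog database so their metadata can be reviewed or validated. The database name is a placeholder.

```python
# List Glue Data Catalog tables and print basic metadata a stewardship
# process might validate (column count, storage location).
import boto3

glue = boto3.client("glue")


def list_catalog_tables(database: str):
    """Yield table metadata dicts for every table registered in `database`."""
    kwargs = {"DatabaseName": database}
    while True:
        response = glue.get_tables(**kwargs)
        yield from response["TableList"]
        token = response.get("NextToken")
        if not token:
            break
        kwargs["NextToken"] = token


if __name__ == "__main__":
    for table in list_catalog_tables("analytics_lake"):  # hypothetical database
        descriptor = table.get("StorageDescriptor", {})
        print(f"{table['Name']}: {len(descriptor.get('Columns', []))} columns, "
              f"location={descriptor.get('Location', 'n/a')}")
```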

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Scrum Master Exp: 3-8 yrs Location: Bangalore Scrum Master advanced certifications(PSM, SASM, SSM, etc) Working experience on Agile project management tools (Jira, VersionOne(Agility.ai) Rally, etc) Good to have skills: SAFE Agile, Scrum master certification Knowledge and experience in working with the Safe framework. Experience with continuous delivery, DevOps, and release management. Experience working with European customers or colleagues as a big plus. Ability to communicate concisely and accurately to team and to management Knowledge in all or several of the following: In software development (Python, JavaScript, ASP, C#, HTML5...) Data storage technologies (SQL, . Net, NoSQL (Neo4J, Neptune), S3, AWS (Redshift) ) Web development technologies and frameworks (e.g. Angular, AngularJS, ReactJS) DevOps methodologies and practices

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe. Position Summary: We are seeking a skilled Data Product Engineering SRE to build, maintain, and ensure the reliability of our business-facing data products and analytics platforms. This role combines product engineering skills with data expertise and site reliability practices to deliver exceptional user experiences through reliable, performant data products. You will work at the intersection of data engineering, product development, and operations to ensure our data products meet user expectations and business requirements. Key Responsibilities: Build and maintain business-facing data products including dashboards, analytics APIs, and reporting platforms Implement data product features that enhance user experience and drive product adoption Create and maintain data APIs with proper authentication, caching, and performance optimization Build automated testing frameworks for data product functionality and user workflows Monitor and maintain SLA compliance for data product availability and performance Implement comprehensive monitoring for data products including frontend performance and backend data pipeline health Establish alerting systems for data product issues affecting user experience Respond to and resolve data product incidents, ensuring minimal customer impact Conduct root cause analysis and implement preventive measures for data product failures Operate and maintain data pipelines that feed analytics and reporting features Implement data validation and quality checks for customer-facing data products Monitor data product performance from end-user perspective (page load times, query response times) Implement user analytics and tracking to understand product usage patterns Participate in user feedback collection and translate insights into technical improvements Implement monitoring dashboards for data product health and business metrics Optimize database queries and data processing jobs for interactive analytics workloads Collaborate with Data Engineering teams to ensure reliable data flow into product features Partner with Frontend Engineers on data visualization and user interface components Required Qualifications: 4+ years of experience in product engineering, data engineering, or SRE roles 2+ years building business-facing applications or data products Experience with data visualization tools and frameworks (Tableau, Looker, or custom solutions) Proficiency in Python, JavaScript / TypeScript, and SQL Experience with React, Vue.js, or similar frameworks for building data interfaces Hands-on experience with analytics databases (Snowflake, BigQuery, Redshift, ClickHouse) Experience building and maintaining API’s Working knowledge of AWS, or GCP data and compute services Experience with application monitoring (DataDog, New Relic) and custom metrics Understanding of product development lifecycle and user-centered design principles Experience with product analytics tools (Google Analytics, Mixpanel, Amplitude) Skills in optimizing application performance for user-facing 
products Experience troubleshooting user-reported issues and providing technical solutions Experience with on-call responsibilities and incident response procedures Skills in setting up comprehensive monitoring and alerting systems Strong debugging skills for distributed systems and data pipeline issues Experience with CI/CD pipelines and infrastructure automation Knowledge of automated testing strategies for data products and APIs Knowledge of data governance and privacy regulations (GDPR, CCPA) in product context Experience with database optimization and query performance tuning Work location: Hyderabad, India Work pattern: Full time role. Work mode: Hybrid.
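Purely for illustration of the SLA-style latency monitoring this role describes for data products, here is a minimal Python sketch. The threshold and the query stub are placeholders; in practice the measurement would be emitted to a monitoring system such as DataDog, New Relic, or CloudWatch rather than printed.

```python
# Latency check sketch: time a data-product query and flag SLA breaches.
import time
from typing import Any, Callable

SLA_SECONDS = 2.0  # hypothetical response-time objective


def run_with_sla(query: Callable[[], Any], name: str) -> Any:
    """Run a query, record its latency, and flag SLA breaches."""
    start = time.perf_counter()
    result = query()
    elapsed = time.perf_counter() - start
    status = "OK" if elapsed <= SLA_SECONDS else "SLA_BREACH"
    print(f"metric=query_latency name={name} seconds={elapsed:.3f} status={status}")
    return result


if __name__ == "__main__":
    # Stand-in for a dashboard query against an analytics database.
    run_with_sla(lambda: time.sleep(0.1) or "42 rows", name="daily_sales_summary")
```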

Posted 4 weeks ago

Apply

0.0 - 10.0 years

0 Lacs

Panchkula, Haryana

On-site

Job Description
We are looking for a skilled and experienced ETL Engineer to join our growing team at Grazitti Interactive. In this role, you will be responsible for building and managing scalable data pipelines across traditional and cloud-based platforms. You will work with structured and unstructured data sources, leveraging tools such as SQL Server, Snowflake, Redshift, and BigQuery to deliver high-quality data solutions. If you have hands-on experience in Python, PySpark, and cloud platforms like AWS or GCP, along with a passion for transforming data into insights, we’d love to connect with you.

Key Skills
- Strong experience (4–10 years) in ETL development using platforms like SQL Server, Oracle, and cloud environments like Amazon S3, Snowflake, Redshift, Data Lake, and Google BigQuery.
- Proficient in Python, with hands-on experience creating data pipelines using APIs (see the sketch after this listing).
- Solid working knowledge of PySpark for large-scale data processing.
- Ability to output results in various formats, including JSON, data feeds, and reports.
- Skilled in data manipulation, schema design, and transforming data across diverse sources.
- Strong understanding of core AWS/Google Cloud services and basic cloud architecture.
- Capable of developing, deploying, and debugging cloud-based data assets.
- Expert-level proficiency in SQL with a solid grasp of relational and cloud-based databases.
- Excellent ability to understand and adapt to evolving business requirements.
- Strong communication and collaboration skills, with experience in onsite/offshore delivery models.
- Familiarity with Marketo, Salesforce, Google Analytics, and Adobe Analytics.
- Working knowledge of Tableau and Power BI for data visualization and reporting.

Roles and Responsibilities
- Design and implement robust ETL processes to ensure data integrity and accuracy across systems.
- Develop reusable data solutions and optimize performance across traditional and cloud environments.
- Collaborate with cross-functional teams, including data analysts, marketers, and engineers, to define data requirements and deliver insights.
- Take ownership of end-to-end data pipelines, from requirement gathering to deployment and monitoring.
- Ensure compliance with internal QMS and ISMS standards.
- Proactively report any data incidents or concerns to reporting managers.

Contacts
Email: careers@grazitti.com
Address: HSIIDC Technology Park, Plot No – 19, Sector 22, 134104, Panchkula, Haryana, India
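For illustration, a minimal Python sketch of the API-to-JSON pipeline pattern referenced in the skills above. The endpoint, field names, and output path are hypothetical; authentication and pagination are omitted for brevity.

```python
# Tiny extract-transform-load sketch: pull records from an API, keep a few
# fields, and write the result to a JSON file.
import json

import requests

API_URL = "https://api.example.com/v1/leads"  # placeholder endpoint


def extract() -> list:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return response.json()  # assume the API returns a JSON array of records


def transform(records: list) -> list:
    # Keep only the fields downstream reporting needs (assumed field names).
    return [{"id": r["id"], "email": r.get("email", "").lower()} for r in records]


def load(records: list, path: str = "leads.json") -> None:
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(records, handle, indent=2)


if __name__ == "__main__":
    load(transform(extract()))
```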

Posted 4 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with data modeling, warehousing and building ETL pipelines - Experience writing complex SQL queries - Experience in Statistical Analysis packages such as R, SAS and Matlab - Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day like a clock work? In order to make that happen, behind those millions of packages, billions of decision gets made by machines and humans. What is the accuracy of customer provided address? Do we know exact location of the address on Map? Is there a safe place? Can we make unattended delivery? Would signature be required? If the address is commercial property? Do we know open business hours of the address? What if customer is not home? Is there an alternate delivery address? Does customer have any special preference? What are other addresses that also have packages to be delivered on the same day? Are we optimizing delivery associate’s route? Does delivery associate know locality well enough? Is there an access code to get inside building? And the list simply goes on. At the core of all of it lies quality of underlying data that can help make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative. Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer of Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. 
You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large scale high visibility/ high impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience. The successful candidate will be able to: - Effectively manage customer expectations and resolve conflicts that balance client and company needs. - Develop process to effectively maintain and disseminate project information to stakeholders. - Be successful in a delivery focused environment and determining the right processes to make the team successful. - This opportunity requires excellent technical, problem solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done. - Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment. - Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams. - Support the scalable growth of the company by developing and enabling the success of the Operations leadership team. - Serve as a role model for Amazon Leadership Principles inside and outside the organization - Actively seek to implement and distribute best practices across the operation Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 4 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics/visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, chi-squared)
- Experience with a scripting language (e.g. Python, Java, or R)

We are looking to hire an insightful, results-oriented Business Intelligence Engineer to produce and drive analyses for the Worldwide Operations Security (WWOS) team at Amazon. To keep our operations network secure and ensure operational continuity, we are seeking an experienced professional to join our Business Insights team. This role involves translating broad business problems into specific analytics projects, conducting deep quantitative analyses, and communicating results effectively.

Key job responsibilities
- Design and implement scalable data infrastructure solutions
- Create and maintain data pipelines for metric tracking and reporting
- Develop analytical models to identify theft/fraud trends and patterns
- Partner with stakeholders to translate business needs into analytical solutions
- Build and maintain data visualization dashboards for operational insights

A day in the life
As a Business Intelligence Engineer I, you will collaborate with cross-functional teams to design and implement data solutions that drive business decisions. Your day might include analyzing theft and fraud patterns, building automated reporting systems, or presenting insights to stakeholders. You'll work with petabyte-scale data sets and have the opportunity to influence strategic decisions through your analysis.

About the team
We are part of the Business Insights team under the Strategy vertical in Worldwide Operations Security, focusing on data analytics to support security and loss prevention initiatives. Our team collaborates across global operations to develop innovative solutions that protect Amazon's assets and contribute to business profitability. We leverage technology to identify patterns, prevent losses, and strengthen our operational network.

- Master's degree or an advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
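The statistical methods this posting names (t-test, chi-squared) can be illustrated with a short sketch in a theft/fraud context; the contingency-table counts below are invented for the example and have no connection to the role.

# Minimal sketch: chi-squared test of independence between facility type and
# incident occurrence. The counts are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: facility type (sort center, delivery station); columns: incident vs. no incident
observed = np.array([
    [42, 9958],   # hypothetical counts
    [87, 9913],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
if p_value < 0.05:
    print("Incident rates differ significantly across facility types.")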

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

India

On-site

About Toku
At Toku, we create enterprise cloud communications and customer engagement solutions to reimagine customer experiences for enterprises. We provide an end-to-end approach to help businesses overcome the complexity of digital transformation in APAC markets and enhance their CX with mission-critical cloud communication solutions. Toku combines local strategic consulting expertise, bespoke technology, regional in-country infrastructure, connectivity, and global reach to serve the diverse needs of enterprises.

About The Role
As we continue creating momentum for our products in the APAC region and helping customers with their communications needs, we are looking for a Data Engineer to ensure the high quality and reliability of our cutting-edge contact center and unified communication platforms, and to contribute to the seamless delivery of exceptional customer experiences. This is an impactful position during a growth phase for the business. You will be instrumental in shaping new processes, bringing new ideas, and selecting tools in a collaborative and highly visible team environment. You will thrive in this role if you have a passion for quality, an eye for detail, and the experience to help take a growing Engineering function to the next level.

What you will be doing
As a Data Engineer, reporting to an Engineering Manager or potentially the VP of Engineering, you will collaborate with stakeholders across the organization. You will be responsible for data pipeline design and development, data infrastructure management, data quality management, and data security and privacy management.

Delivery
This axis refers to reliability in delivering impactful results across various scopes, including tasks, features, projects, initiatives, teams, and the organization. For a Data Engineer, this involves:
- Building robust and efficient data pipelines to extract, transform, and load data from various sources.
- Ensuring optimal performance and scalability of the data warehouse.
- Designing and implementing a scalable data architecture to support future growth and innovation.
- Providing clean and reliable datasets to stakeholders to assist them in building and optimizing products into innovative industry leaders.
- Identifying opportunities to automate manual tasks and optimize data delivery.
- Assisting stakeholders in leveraging data to drive product innovation.

Strategic Alignment
This involves the ability to prioritize work and influence goals and direction for oneself, the team, and the organization. In the Data Engineer role, this means:
- Ensuring the data infrastructure can handle future growth and maintain high availability.
- Maintaining data accuracy, integrity, and consistency to support reliable decision-making.
- Adhering to data standards, security protocols, and compliance regulations.
- Staying informed about emerging technologies and their potential benefits.
- Following industry best practices to optimize data pipelines and processes.

Talent
This axis focuses on contributions to raising the bar by strengthening oneself and others, and by attracting talent. Data Engineers are expected to:
- Actively participate in knowledge sharing and contribute to the growth and development of the Data Engineering team.
- Provide guidance and mentorship to fellow data engineers, offering support and training to enhance their skills and performance.

Culture
This describes the level of participation in Toku's culture and collaboration across different functions, teams, and organizations.
For a Data Engineer, this means:
- Maintaining excellent interpersonal skills, with strong written and oral communication abilities in English.
- Being able to work independently and in a fast-paced, dynamic startup environment.
- Fostering a continuous learning mindset, staying up to date with the latest trends and technologies.

Technical Excellence
This refers to the knowledge and fluency within one's technical functional area of expertise that enables engineering and operational excellence. Key technical proficiencies for a Data Engineer include:
- Programming languages: Proficiency in languages like Python and SQL for data manipulation, analysis, and automation.
- Data technologies: Expertise in tools like Databricks, Spark, and Kafka for handling large and complex datasets. Experience with Amazon Redshift is also beneficial.
- Data warehousing and ETL/ELT: Knowledge of data warehousing concepts and ETL/ELT processes to design and implement data pipelines.
- Cloud platforms: Familiarity with cloud platforms (AWS) for deploying and managing data infrastructure. Toku's broader architecture leverages containerized services, serverless computing, and modern DevOps deployment practices for scalability and resilience.
- Database systems: Understanding of both relational (SQL) and NoSQL databases.
- Data modeling: Ability to design efficient data models to support business needs.

Expected Collaborations
- Work closely with the Data Manager to align on data strategies and goals for Toku.
- Collaborate with the BI team on data initiatives and ensure optimal data delivery remains consistent across ongoing projects.
- Provide technical input to data stakeholders and assist them in building and optimizing pipelines for their data needs.
- Partner with the Infra team on provisioning, capacity planning, monitoring, and maintenance.
- Work with the Security team to implement security policies and address privacy concerns.
- Share knowledge and best practices related to Data Engineering tools and techniques with fellow team members.

We would love to hear from you if you have:
- At least a Bachelor's degree in Data Science, Information Technology, or a relevant field.
- Around 3+ years of relevant experience in Data Engineering.
- Significant experience with Databricks, SQL, Python, ETL processes, and best practices for data engineering.
- Proficiency in languages like Python and SQL for data manipulation, analysis, and automation.
- Working knowledge of and familiarity with a variety of databases (SQL and NoSQL).
- Good-to-have exposure to building and optimizing 'big data' pipelines, architectures, and data sets.
- Experience with tools like Databricks, Spark, and Kafka for handling large and complex datasets.
- Knowledge of data warehousing concepts and ETL/ELT processes to design and implement data pipelines.
- Familiarity with cloud platforms (AWS).
- Strong analytical and problem-solving skills to resolve data-related challenges.
- Ability to work collaboratively in cross-functional teams.
- Ability to think critically and innovate to improve data processes.
- Effective communication skills to collaborate with business stakeholders.
- Knowledge of Agile methodologies and experience working in Agile environments.

If you would love to experience working in a fast-paced, growing company and believe you meet most of the requirements, come join us!
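To make the extract-transform-load responsibility above concrete, here is a minimal, hedged sketch of one batch pipeline step in Python using Kafka and S3; the topic, bucket, and field names are assumptions for illustration only, not part of Toku's stack.

# Minimal sketch: consume call-event records from Kafka, apply a light
# transformation, and land them in S3 as newline-delimited JSON.
# Topic, bucket, and field names are hypothetical.
import json
import boto3
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "call-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,          # stop when the topic is idle (batch-style run)
)

s3 = boto3.client("s3")
batch = []
for message in consumer:
    event = message.value
    # Transform: keep only the fields downstream consumers need, normalize units
    batch.append({
        "call_id": event.get("id"),
        "duration_s": round(event.get("duration_ms", 0) / 1000, 1),
        "outcome": (event.get("outcome") or "unknown").lower(),
    })

if batch:
    body = "\n".join(json.dumps(row) for row in batch)
    s3.put_object(
        Bucket="example-data-lake",                     # hypothetical bucket
        Key="raw/call_events/batch-0001.jsonl",
        Body=body.encode("utf-8"),
    )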

Posted 4 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Kochi M.G.Road, Kochi, Kerala

On-site

Data Engineer
Experience: 2-4 years
Location: Kochi, Kerala (Work From Office)

Key Responsibilities:
- Build and manage data lakes and data warehouses using services like Amazon S3, Redshift, and Athena
- Design and build secure, scalable, and efficient ETL/ELT pipelines on AWS using services like Glue, Lambda, and Step Functions
- Work on SAP Datasphere to build and maintain Spaces, Data Builders, Views, and Consumption Layers
- Support data integration between AWS, Datasphere, and various source systems (SAP S/4HANA, non-SAP apps, flat files, etc.)
- Develop and maintain scalable data models and optimize queries for performance
- Monitor and optimize data workflows to ensure reliability, performance, and cost-efficiency
- Collaborate with Data Analysts and BI teams to provide clean, validated, and well-documented datasets
- Monitor, troubleshoot, and enhance data workflows and pipelines
- Ensure data quality, integrity, and governance policies are met

Required Skills:
- Strong SQL skills and experience with relational databases like MySQL or SQL Server
- Proficiency in Python or Scala for data transformation and scripting
- Familiarity with cloud platforms like AWS (S3, Redshift, Glue), Datasphere, and Azure

Good-to-Have Skills:
- AWS certification (AWS Certified Data Analytics)
- Exposure to modern data stack tools like Snowflake
- Experience in cloud-based projects and working in an Agile environment
- Understanding of data governance, security best practices, and compliance standards

Job Types: Full-time, Permanent
Pay: Up to ₹960,000.00 per year

Application Question(s): Willing to take up Work from Office mode in Kochi Location?

Experience:
- Data Engineer / ETL Developer: 2 years (Required)
- AWS: 2 years (Required)
- SQL and (Python OR Scala): 2 years (Required)
- Datasphere OR "SAP BW" OR "SAP S/4HANA": 2 years (Required)
- AWS (S3, Redshift, Glue), Datasphere, Azure: 2 years (Required)
- PostgreSQL and MySQL or SQL Server: 2 years (Required)
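As a rough illustration of the S3-to-Redshift loading pattern this role describes (a Lambda function triggering a COPY into the warehouse), here is a minimal sketch; the bucket, table, IAM role, and cluster identifiers are hypothetical assumptions.

# Minimal sketch: an AWS Lambda handler that loads a newly arrived S3 object
# into Redshift via the Redshift Data API. All identifiers are hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # S3 put-event structure: pull out the bucket and key of the new file
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    copy_sql = f"""
        COPY staging.sales_raw                         -- hypothetical table
        FROM 's3://{bucket}/{key}'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'  -- hypothetical role
        FORMAT AS CSV IGNOREHEADER 1;
    """

    response = redshift_data.execute_statement(
        ClusterIdentifier="example-cluster",           # hypothetical cluster
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )
    return {"statement_id": response["Id"]}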

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
- Perform data integration and sourcing activities across various platforms.
- Develop data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for data platforms, including problem and incident management.
- Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
- Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
- Must-have: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
- Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
- Advanced skills in SQL and Python.
- Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred
15 years full-time education
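Since the must-have skills pair S3 and Redshift with Airflow, a minimal DAG sketch may help show what day-to-day pipeline work in this stack looks like; the DAG id, schedule, and task bodies are hypothetical placeholders, not this employer's code.

# Minimal sketch: an Airflow 2.4+ DAG with a daily extract -> load dependency.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # Placeholder: pull source data and write it to S3 (e.g. with boto3)
    print("extracting source data to s3://example-bucket/daily/")

def load_to_redshift(**context):
    # Placeholder: issue a COPY into Redshift (e.g. via the Redshift Data API)
    print("loading staged files into Redshift")

with DAG(
    dag_id="daily_customer_assets",          # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load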

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Analytics
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), AWS Redshift
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
- Perform data integration and sourcing activities across various platforms.
- Develop data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for data platforms, including problem and incident management.
- Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
- Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
- Must-have: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
- Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
- Advanced skills in SQL and Python.
- Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred
15 years full-time education

Posted 4 weeks ago

Apply

11.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote

Job Title: Python Technical Lead
Experience: 11+ years
Location: Remote
Time Zone: 2 PM to 11 PM IST

Primary Tech Stack
- Python (Flask)
- Celery
- AWS (Lambda, Redshift, Glue, S3)
- Microservices & API development
- Database optimization (SQL, PostgreSQL, Amazon Aurora RDS)
- CI/CD & infrastructure (GitHub Actions, GitLab CI/CD, Docker, Kubernetes, Terraform, CloudFormation)
- Monitoring & logging (AWS CloudWatch, ELK Stack, Prometheus)
- Security & compliance best practices

Secondary Tech Stack
- Agile/Scrum methodologies
- Service discovery & API gateway design
- Distributed systems optimization

Job Responsibilities
- Own backend architecture and lead the development of scalable, efficient web applications and microservices.
- Ensure production-grade AWS deployment with high availability, cost optimization, and security best practices.
- Design and optimize databases (RDBMS, SQL) for performance, scalability, and reliability.
- Lead API and microservices development, ensuring seamless integration, scalability, and maintainability.
- Implement high-performance solutions focusing on low latency, uptime, and data accuracy.
- Mentor and guide developers, fostering a culture of collaboration, disciplined coding, and technical excellence.
- Conduct technical reviews, enforce best coding practices, and ensure security and compliance adherence.
- Drive automation and CI/CD pipelines to enhance deployment efficiency and reduce operational overhead.
- Communicate technical concepts effectively to technical and non-technical stakeholders.
- Provide accurate work estimations and align development efforts with broader business objectives.

Preferred Experience
- Experience in high-performance, product-focused companies emphasizing uptime, defect reduction, and system reliability.
- Hands-on leadership in scaling cloud infrastructure and optimizing backend services.
- Proven ability to lead and mentor a development team while driving strategic technical initiatives.

(ref:hirist.tech)
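The Flask-plus-Celery stack listed above typically pairs a thin HTTP endpoint with an asynchronous worker; a minimal, hedged sketch might look like the following. The broker URL, route, and task body are hypothetical, not the employer's implementation.

# Minimal sketch: a Flask endpoint that offloads work to a Celery task.
from celery import Celery
from flask import Flask, jsonify

app = Flask(__name__)
celery_app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",    # hypothetical broker
    backend="redis://localhost:6379/1",
)

@celery_app.task
def generate_report(account_id: int) -> str:
    # Placeholder for a slow job (e.g. aggregating data from Redshift)
    return f"report for account {account_id} ready"

@app.route("/reports/<int:account_id>", methods=["POST"])
def request_report(account_id: int):
    # Enqueue the task and return immediately with its id
    result = generate_report.delay(account_id)
    return jsonify({"task_id": result.id}), 202

if __name__ == "__main__":
    app.run(debug=True)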

Posted 4 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Must-Have:
- Bachelor's or Master's in Computer Science, Electrical Engineering, or a related field.
- 7+ years of experience in software/product engineering, with 2+ years in a technical leadership or management role.
- Deep understanding of cloud architecture, video pipelines, edge computing, and microservices.
- Proficient in AWS/GCP/Azure, Docker/Kubernetes, serverless computing, and RESTful API design.
- Solid grasp of AI/ML integration and computer vision workflows (model lifecycle, optimization, deployment).
- Experience in data pipelines, SQL/NoSQL databases, and analytics tools (e.g., Redshift, Snowflake, Grafana).

Good to Have:
- Prior work on automotive, fleet, or video telematics solutions.
- Experience with camera hardware (MIPI, ISP tuning), compression codecs (H.264/H.265), and event-based recording.
- Familiarity with telematics protocols (CAN, MQTT) and geospatial analytics.
- Working knowledge of data privacy regulations (GDPR, CCPA).

Key Responsibilities:

Leadership:
- Own and drive the development of the complete video telematics ecosystem, covering edge AI, video processing, the cloud platform, and data insights.
- Architect and oversee scalable and secure cloud infrastructure for video streaming, data storage, OTA updates, analytics, and alerting systems.
- Define the data pipeline architecture for collecting, storing, analyzing, and visualizing video, sensor, and telematics data.

Cross-functional Engineering Management:
Lead and coordinate cross-functional teams:
- Cloud/backend team for infrastructure, APIs, analytics dashboards, and system scalability.
- AI/ML and CV teams for DMS/ADAS model deployment and real-time inference.
- Hardware/embedded teams for firmware integration, camera tuning, and SoC optimization.
- Collaborate with product and business stakeholders to define the roadmap and features.

Analytics:
- Work closely with data scientists to build meaningful insights and reports from video and sensor data.
- Drive implementation of real-time analytics, fleet safety scoring, and predictive insights using telematics data.

Responsibilities:
- Own product quality and system reliability across all components.
- Support product rollout, monitoring, and updates in production.
- Manage resources, mentor engineers, and build a high-performing development team.
- Ensure adherence to industry standards for security, privacy (GDPR), and compliance (e.g., GSR, AIS-140).

(ref:hirist.tech)

Posted 4 weeks ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

Remote

Location: Remote
Experience Level: 7+ years

Job Summary
We are seeking an experienced Senior Data Platform Architect with strong expertise in Amazon Redshift to lead the design and implementation of robust data architectures. The ideal candidate will bring a solid background in data warehousing, performance tuning, and workload management, combined with excellent communication and stakeholder engagement skills.

Key Responsibilities
- Design and architect scalable and efficient data platform solutions using Amazon Redshift (both Serverless and Provisioned).
- Define and implement workload isolation strategies, optimizing for performance and cost.
- Lead technical design sessions and provide strategic direction throughout implementation phases.
- Tune performance via RPU sizing, concurrency scaling, and resource management best practices.
- Develop solutions involving Redshift data sharing and cross-cluster access patterns.
- Collaborate with BI/reporting teams, ideally with MicroStrategy, to support analytical and dashboarding requirements.
- Produce high-quality technical documentation and communicate architecture decisions clearly to stakeholders.

Required Skills & Experience
- Proven experience designing and implementing modern data platform architectures.
- Expertise in Amazon Redshift, including Serverless and Provisioned modes.
- Strong knowledge of performance tuning and workload management in Redshift.
- Understanding of data sharing across clusters and access control strategies.
- Background in BI/reporting environments; MicroStrategy experience is a strong plus.
- Excellent problem-solving abilities and strong communication skills.
- Ability to work independently as well as in a team-oriented, collaborative environment.

Preferred Qualifications
- AWS certification in Big Data or Data Analytics.
- Experience with broader AWS data services (e.g., Glue, S3, Athena).
- Knowledge of data governance and security best practices.

(ref:hirist.tech)
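Because this role centers on Redshift data sharing and cross-cluster access, a minimal sketch of producer-side setup may help illustrate the pattern; it drives standard datashare DDL through the Redshift Data API from Python, and the datashare, schema, cluster, and namespace values are hypothetical placeholders.

# Minimal sketch: create a Redshift datashare on a producer cluster and grant
# it to a consumer namespace via the Redshift Data API. Names are hypothetical.
import boto3

client = boto3.client("redshift-data")

statements = [
    "CREATE DATASHARE sales_share;",
    "ALTER DATASHARE sales_share ADD SCHEMA sales;",
    "ALTER DATASHARE sales_share ADD TABLE sales.orders;",
    # Consumer namespace id below is a placeholder GUID
    "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE '00000000-0000-0000-0000-000000000000';",
]

for sql in statements:
    response = client.execute_statement(
        ClusterIdentifier="producer-cluster",   # hypothetical producer cluster
        Database="analytics",
        DbUser="admin",
        Sql=sql,
    )
    print(sql.split()[0], "->", response["Id"])

# On the consumer cluster, a database would then be created from the share, e.g.:
# CREATE DATABASE sales_from_producer FROM DATASHARE sales_share
#   OF NAMESPACE '<producer-namespace-id>';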

Posted 4 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies