
3597 Redshift Jobs - Page 40

JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About The Role:
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients, participating in activities ranging from upstream and downstream technology selection to designing and building the different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance across the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Location: Chennai

Qualifications:
- B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or another engineering discipline
- Minimum 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
- Highly proficient in SQL and data modeling (conceptual and logical) concepts
- Highly proficient with Python and Spark (3+ years)
- Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
- 2+ years of hands-on experience with one of the top cloud platforms: AWS/GCP/Azure
- Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
- Exposure to Hadoop and shell scripting a plus

Responsibilities:
- Design, implement, and improve processes and automation for data infrastructure
- Tune data pipelines for reliability and performance
- Build tools and scripts to develop, monitor, and troubleshoot ETLs
- Perform scalability, latency, and availability tests on a regular basis
- Perform code reviews and QA data imported by various processes
- Investigate, analyze, correct, and document reported data defects
- Create and maintain technical specification documentation

Eucloid offers a high growth path along with great compensation, which is among the best in the industry. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.
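For flavor, here is a minimal sketch of the kind of Databricks/PySpark pipeline work this posting describes: ingest raw data, clean it, and write a curated Delta table. The bucket paths, column names, and schema are hypothetical, not Eucloid's actual stack.

```python
# Minimal PySpark pipeline sketch: ingest raw JSON, clean it, and write a
# curated Delta table. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Ingest raw upstream data (schema inferred here for brevity).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Basic cleaning: drop duplicates, normalize types, filter bad rows.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# On Databricks, Delta is the default table format; elsewhere this needs
# the delta-spark package configured on the session.
curated.write.format("delta").mode("overwrite").save(
    "s3://example-bucket/curated/orders/"
)
```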

Posted 3 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Greater Bengaluru Area

On-site

Job Title: Senior Data Analyst
Location: Bangalore
Experience: 3-4 Years
Department: Analytics / Business Intelligence
Employment Type: Full-time

Job Summary:
We are seeking a highly skilled and detail-oriented Senior Data Analyst to join our data-driven team. The ideal candidate will have strong expertise in SQL, Tableau, and MS Excel, with a foundational understanding of Python for data analysis and automation. You will play a key role in turning data into actionable insights that influence strategic decisions across the business.

Key Responsibilities:
- Design, develop, and maintain SQL queries to extract and analyze large datasets from multiple sources.
- Build interactive Tableau dashboards and reports to visualize business trends and performance metrics.
- Perform advanced data analysis in MS Excel, including pivot tables, lookups, and complex formulas.
- Use Python for data cleaning, automation, and basic exploratory data analysis.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Conduct root cause analysis and identify key insights to support business decisions.
- Ensure data accuracy, consistency, and integrity across all reporting and analytics deliverables.
- Exposure to AI-driven analytics or interest in learning AI-based tools will be an added advantage.

Required Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, Economics, or a related field.
- Proficiency in SQL for data extraction and transformation.
- Strong expertise in Tableau for building reports and dashboards.
- Advanced skills in Microsoft Excel, including macros, charts, and data modeling.
- Working knowledge of Python for scripting and data manipulation (Pandas, NumPy preferred).
- Previous experience in the e-commerce industry (mandatory).
- Strong problem-solving abilities and attention to detail.
- Excellent communication and data storytelling skills.

Preferred Qualifications:
- Experience with data warehousing tools like Snowflake, Redshift, or BigQuery.
- Exposure to cloud platforms (AWS, Azure, GCP).
- Familiarity with ETL tools and processes.
- Background in A/B testing or statistical modeling is a plus.

Join us if you're passionate about turning data into insights and want to drive real business impact!
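As a small illustration of the pandas-based analysis this role calls for, the sketch below mirrors an Excel-style pivot in Python. The sales.csv file and its columns are invented for the example.

```python
# Toy exploratory analysis in pandas, mirroring the Excel-style pivot work
# described above. The sales.csv file and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Pivot table: monthly revenue by category (analogous to an Excel pivot).
monthly = df.pivot_table(
    index=df["order_date"].dt.to_period("M"),
    columns="category",
    values="revenue",
    aggfunc="sum",
)

# Month-over-month growth per category, a common dashboard KPI.
growth = monthly.pct_change().round(3)
print(growth.tail())
```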

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

Role Description:
This is a full-time position with one of our clients for a Redshift Database Administrator. The Redshift Database Administrator will be responsible for managing, designing, and troubleshooting Redshift databases. Daily tasks include performing database administration, ensuring database design is optimal, troubleshooting issues, managing database replication, and ensuring database performance and integrity.

Qualifications:
- Proficiency in database administration and database design
- Strong troubleshooting skills
- Experience with databases and replication
- Strong understanding of database design, performance tuning, and optimization techniques
- Proficiency in SQL and experience with database scripting languages (e.g., Python, Shell)
- Experience with database backup and recovery, security, and high-availability solutions
- Familiarity with AWS services and tools, including S3, EC2, IAM, and CloudWatch
- Excellent problem-solving abilities and analytical skills
- Ability to work independently and remotely
- Advanced knowledge of AWS Redshift is a plus
- Bachelor's degree in Computer Science, Information Technology, or a related field
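A hedged sketch of one routine Redshift DBA task implied above: querying the SVV_TABLE_INFO system view to flag tables that likely need VACUUM or ANALYZE. The cluster endpoint and credentials are placeholders.

```python
# Routine Redshift health check: flag tables with stale statistics or a
# large unsorted region, i.e. likely ANALYZE/VACUUM candidates.
# Connection details below are placeholders, not a real cluster.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="admin", password="...",
)

CHECK_SQL = """
SELECT "schema", "table", tbl_rows, unsorted, stats_off
FROM svv_table_info
WHERE unsorted > 20 OR stats_off > 10
ORDER BY unsorted DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(CHECK_SQL)
    for schema, table, rows, unsorted, stats_off in cur.fetchall():
        print(f"{schema}.{table}: rows={rows} "
              f"unsorted%={unsorted} stats_off%={stats_off}")
```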

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Who we are?
Searce means 'a fine sieve' & indicates 'to refine, to analyze, to improve'. It signifies our way of working: to improve to the finest degree of excellence, 'solving for better' every time. Searcians are passionate improvers & solvers who love to question the status quo. The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, every day.

What are we looking for?
Are you a keen learner? Excellent mentor? Passionate coach? We're looking for someone who's all three! We're on the lookout for someone who can design and implement our data processing pipelines for all kinds of data sources.

What you'll do as a Manager - Data Engineering with us?
1. You have worked in environments of different shapes and sizes: on-premise, private cloud, public cloud, hybrid, all Windows / Linux or a healthy mix. Thanks to this experience, you can connect the dots quickly and understand client pain points.
2. You are curious. You keep up with the breakneck speed of innovation on public cloud. When something new gets released or an existing service changes, you try it out and you learn.
3. You have a strong database background - relational and non-relational alike.
   a. MySQL, PostgreSQL, SQL Server, Oracle.
   b. MongoDB, Cassandra, and other NoSQL databases.
   c. Strong SQL query writing experience.
   d. HA, DR, performance tuning, migrations.
   e. Experience with the cloud offerings - RDS, Aurora, CloudSQL, Azure SQL.
4. You have hands-on experience with designing, deploying, and migrating enterprise data warehouses and data lakes.
   a. Familiarity with migrations from the likes of Netezza, Greenplum, and Oracle to BigQuery/Redshift/Azure Data Warehouse.
   b. Dimensional data modelling, reporting & analytics (see the star-schema sketch after this posting).
   c. Designing ETL pipelines.
5. You have experience with advanced analytics - the ability to work with the Applied AI team and assist in delivering predictive analytics, ML models, etc.
6. You have experience with the big data ecosystem.
   a. Self-managed Hadoop clusters, distributions like Hortonworks, and the cloud equivalents like EMR, Dataproc, HDInsight.
   b. Apache Hudi, Hive, Presto, Spark, Flink, Kafka, etc.
7. You have hands-on experience with tools: Apache Airflow, Talend, Tableau, Pandas, DataFlow, Kinesis, Stream Analytics, etc.

What are the must-haves to join us?
1. Is education overrated? Yes. We believe so. However, there is no way to locate you otherwise. So we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been coding since your 6th grade. And the latter is better. We will find you faster if you specify the latter in some manner. :)
2. 8-10+ years of overall IT experience with a strong data engineering and business intelligence background.
3. Minimum 3 years of experience on projects with GCP / AWS / Azure.
4. Minimum 3+ years of experience in data & analytics delivery and management consulting, working with data migration, ETL, business intelligence, data quality, data analytics, and AI tools.
5. 4+ years of hands-on experience with Python & SQL.
6. Experience across data solutions including data lake, warehousing, ETL, streaming, reporting, and analytics tools.
7. Prior experience in recruitment, training & grooming of geeks.
8. Great-to-have certifications:
   a. GCP and/or AWS, professional level.
   b. Your contributions to the community - tech blogs, Stack Overflow, etc.
9. Strong communication skills to communicate across a diverse audience with varying levels of business and technical expertise.

So, if you are passionate about tech, the future & what you read above (we really are!), apply here to experience the 'Art of Possible'.
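The dimensional data modelling mentioned in point 4 can be illustrated with a toy star schema. The sketch below uses SQLite so it runs anywhere; all table and column names are invented.

```python
# Tiny, self-contained star schema: one fact table, two dimensions.
# Built in SQLite purely for illustration; names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL
);
INSERT INTO dim_customer VALUES (1, 'Acme', 'APAC'), (2, 'Globex', 'EMEA');
INSERT INTO dim_date VALUES (20250101, '2025-01-01', '2025-01'),
                            (20250201, '2025-02-01', '2025-02');
INSERT INTO fact_sales VALUES (1, 1, 20250101, 120.0), (2, 2, 20250201, 80.5);
""")

# A typical reporting query against the star: revenue by region and month.
for row in conn.execute("""
    SELECT c.region, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c USING (customer_key)
    JOIN dim_date d USING (date_key)
    GROUP BY c.region, d.month
"""):
    print(row)
```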

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

India

Remote

Job Title: AWS Data Engineer
📍 Location: Remote (India)
🕒 Experience: 3+ Years
💼 Employment Type: Full-Time

About the Role:
We're looking for a skilled AWS Data Engineer with 3+ years of hands-on experience in building and managing robust, scalable data pipelines using AWS services. The ideal candidate will have a strong foundation in processing both structured and unstructured data, particularly from IoT/sensor sources. Experience in the energy sector and with time-series data is highly desirable.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
- Integrate and process structured, unstructured, and real-time sensor/IoT data
- Ensure pipeline performance, reliability, and fault tolerance
- Collaborate with data scientists, analysts, and engineering teams to build analytics-ready solutions
- Transform data using Python, Pandas, and SQL
- Enforce data integrity, quality, and security standards
- Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
- Monitor workflows, troubleshoot pipeline issues, and implement solutions
- Explore and contribute to the use of modern AWS tools like Bedrock, Textract, Rekognition, and GenAI applications

Required Skills & Qualifications:
- Bachelor's/Master's in Computer Science, IT, or a related field
- Minimum 3 years of experience in AWS data engineering
- Proficient in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
- Strong knowledge of data modeling, warehousing, and pipeline orchestration
- Experience with Git and Infrastructure as Code using Terraform

Preferred Skills:
- Experience with energy sector data or sensor-based/IoT data
- Exposure to ML tools like SageMaker, TensorFlow, Scikit-learn
- Familiarity with Apache Spark, Kafka
- Experience with data visualization tools: Tableau, Power BI, AWS QuickSight
- Awareness of data governance tools like AWS Data Quality, Collibra, DataBrew
- AWS certifications (e.g., Data Analytics Specialty, Solutions Architect Associate)
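One plausible building block for the Athena work listed above is running an ad-hoc query from Python with boto3, sketched below. The database, table, and S3 locations are placeholders, not a real pipeline.

```python
# Hedged sketch: run an ad-hoc Athena query from Python with boto3 and
# poll for completion. Bucket, database, and table names are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT sensor_id, avg(reading) FROM telemetry GROUP BY sensor_id",
    QueryExecutionContext={"Database": "iot_raw"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes (production code would add backoff/timeouts).
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid
    )["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"{len(rows) - 1} result rows")  # the first row is the header
```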

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Kochi, Kerala, India

On-site

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary:
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
- Solving complex data problems to deliver insights that help our business achieve its goals
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
- Creating data products for analytics team members to improve productivity
- Calling AI services like vision, translation, etc. to generate outcomes that can be used in further steps along the pipeline
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
- Preparing data to create a unified database and building tracking solutions ensuring data quality
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional And Technical Skills:
- Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2)
- Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras, etc.), and SQL; 2-3 years of hands-on experience working with these technologies
- Experience in one of the many BI tools such as Tableau, Power BI, Looker
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
- Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse

Additional Information:
- Experience working in cloud data warehouses like Redshift or Synapse
- Certification in any one of the following or equivalent:
  - AWS: AWS Certified Data Analytics - Specialty
  - Azure: Microsoft Certified Azure Data Scientist Associate
  - Snowflake: SnowPro Core - Data Engineer
  - Databricks: Data Engineering

About Our Company | Accenture

Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Software Engineer – Integration (Linux)

Skills:
To be successful in this role as a Linux-focused Integration "Software Engineer – OSS Platform Engineering", you should possess the following skillsets:
- Strong Linux proficiency and expertise with containerization and Kubernetes, with programming expertise in one of the high-level languages like Python, Java, or Golang, plus NetDevOps automation.
- Hands-on expertise with IaC, cloud platforms, CI/CD pipelines for data, containerization & orchestration, and SRE principles.
- Strong knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways, etc.) and data engineering tools/frameworks like Apache Spark, Airflow, Flink, and Hadoop ecosystems.

Some other highly valued skills include:
- Expertise building ELT pipelines and cloud/storage integrations - data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.).
- Solid understanding of DevOps tooling, GitOps, CI/CD, config management, Jenkins, build pipelines, and source control systems.
- Working knowledge of cloud infrastructure services: compute, storage, networking, hybrid connectivity, monitoring/logging, security, and IAM.
- SRE experience. Expertise building and defining KPIs (SLIs/SLOs) using open-source tooling like ELK, Prometheus, and various other instrumentation, telemetry, and log analytics.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.
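As a minimal illustration of the Airflow-based orchestration named above, here is a two-task DAG sketch (assuming Airflow 2.x); the DAG id and task bodies are purely illustrative.

```python
# Minimal Airflow 2.x DAG: extract then load as two dependent tasks.
# The DAG id and task bodies are illustrative placeholders only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling from the source system...")


def load():
    print("writing to the warehouse...")


with DAG(
    dag_id="example_elt",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```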

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Position: Senior Database Administrator
Job Type: Remote - Full Time
Experience: 7+ years

About the role:
We are looking for a highly skilled Senior Database Administrator (DBA) with expertise across both cloud and on-premise environments to join our Production DBA Team. The ideal candidate will ensure high availability, performance, and security of critical database systems while driving root cause analysis, automation, and proactive monitoring. You will take end-to-end ownership of issues, maintain clear communication with stakeholders, and collaborate with cross-functional teams to drive timely resolution, all while adhering to change management and production governance protocols.

Key Responsibilities:
- Administer, maintain, and optimize databases across on-premise and cloud platforms including AWS (RDS MySQL/Aurora/Postgres, Redshift) and Oracle Cloud Infrastructure (OCI - DBCS, ADW).
- Manage and tune MySQL, PostgreSQL, Oracle (CDB/PDB), Redshift, and Hadoop environments.
- Perform advanced performance tuning, capacity planning, and health checks using tools such as SQL Developer, OCI Metrics, Performance Hub, and SolarWinds DPA.
- Implement monitoring and alerting systems (CloudWatch, Opsgenie, OCI Alarms), and proactively resolve CPU, memory, I/O, and storage issues (see the CloudWatch sketch after this posting).
- Handle database backup, recovery, replication, and housekeeping tasks, ensuring minimal downtime and data integrity.
- Troubleshoot issues related to tablespaces, indexes, mounting failures, blocking/deadlocks, and data replication.
- Work with command-line tools (OCI CLI) and develop automation scripts in Shell, Python, or Perl.
- Administer wallets and password-less authentication, and manage Oracle ADW services.
- Collaborate with vendors (Oracle, AWS, SolarWinds) to resolve escalated issues efficiently.
- Maintain detailed documentation and communicate effectively across technical and non-technical teams.

Requirements:
- 6-10 years of hands-on DBA experience with increasing responsibility in enterprise environments.
- Strong experience with MySQL, PostgreSQL, Oracle, and cloud-based databases (AWS RDS, Redshift, OCI DBCS/ADW).
- Solid scripting skills in Python, Shell, or Perl for automation and operational efficiency.
- Experience with database performance tuning, capacity planning, and backup strategies.
- Working knowledge of Hadoop ecosystems is a strong plus.
- Familiarity with wallet management, password-less auth, and Oracle multi-tenant architecture (CDB/PDB).
- Excellent problem-solving, interpersonal, and communication skills.
- Ability to work within SLAs and maintain high levels of ownership and accountability.
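The proactive monitoring responsibility above might look something like the following boto3 sketch, which pulls recent RDS CPU metrics from CloudWatch and flags sustained load. The instance identifier and the 85% threshold are assumptions.

```python
# Sketch of proactive monitoring: pull the last hour of CPU utilisation
# for an RDS instance from CloudWatch and flag sustained high load.
# The instance identifier and threshold are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

resp = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "example-mysql-prod"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                # 5-minute buckets
    Statistics=["Average"],
)

points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
if points and all(p["Average"] > 85 for p in points[-3:]):
    print("ALERT: CPU above 85% for the last 15 minutes")
```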

Posted 3 weeks ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Bengaluru, Karnataka, India
Department: Data Engineering
Job posted on: Jul 09, 2025
Employment type: Full Time

About Us:
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the Iceberg sketch after this posting).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture:
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!

Personal Data Protection Act:
By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
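The Iceberg work referenced in the responsibilities could be sketched as below: a PySpark session with an Iceberg catalog writing a small table. The catalog name, warehouse path, and table are hypothetical, and a real setup also needs the Iceberg Spark runtime package on the classpath.

```python
# Hedged sketch of writing an Apache Iceberg table from PySpark. The
# catalog configuration and all names are assumptions for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

df = spark.createDataFrame(
    [(1, "credit", 120.0), (2, "debit", 80.5)],
    ["txn_id", "txn_type", "amount"],
)

# DataFrameWriterV2 API (Spark 3+): create or replace an Iceberg table.
df.writeTo("demo.fintech.transactions").using("iceberg").createOrReplace()

# Iceberg keeps snapshots, enabling the time-travel queries mentioned
# above (snapshot metadata lives in demo.fintech.transactions.snapshots).
spark.sql("SELECT * FROM demo.fintech.transactions").show()
```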

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Basic Qualifications:
- 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience with building back-end aggregated tables/pipelines using ANDES, AWS Cradle, S3

IN-COBRA (center of business reporting and analytics), the central BIE team for IN-Stores, requires a BIE who will help us make effective decisions based on data from multiple sources, compiling it and triangulating it into a digestible and actionable format and dashboards. This will help us provide historic information on our business metrics and our customer metrics, and make effective decisions for the future. The BIE will create pipelines for reports to analyze data, make sense of the results, and be able to explain what it all means to key stakeholders. This individual will analyze large amounts of data, discover and solve real-world problems, and build metrics and business cases around key performance of the Project Tez programs. The ideal candidate will use a customer-backwards approach in deriving insights and identifying actions we can take to improve the customer experience and conversion for the program.

Key job responsibilities:
- Develop and streamline necessary dashboards and one-off analyses, providing the ability to surface business-critical KPIs, monitor the health of metrics, and effectively communicate performance.
- Partner with stakeholders and other Business Intelligence teams to acquire necessary data for robust analysis.
- Convert data into insights, including implications and recommendations that are specific and actionable for the Private Brands team and across the business.
- Partner with other analysts as well as data engineering and technology teams to support building best-in-class dashboards and data infrastructure.
- Communicate insights using data visualization and presentations to stakeholders.

The successful candidate will be an expert at analyzing large data sets and have exemplary communication skills. The candidate will need to be a self-starter, very comfortable with ambiguity in a fast-paced and ever-changing environment, and able to think big while paying careful attention to detail.

Preferred Qualifications:
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
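To illustrate the statistical methods this posting names (e.g., the t-test), here is a self-contained example on synthetic data; the group means and sample sizes are invented.

```python
# Small illustration of a two-sample (Welch) t-test comparing a metric
# between two experiment groups. All data here is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.112, scale=0.02, size=500)  # e.g. a conversion proxy
variant = rng.normal(loc=0.118, scale=0.02, size=500)

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```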

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Crunchyroll:
Founded by fans, Crunchyroll delivers the art and culture of anime to a passionate community. We super-serve over 100 million anime and manga fans across 200+ countries and territories, and help them connect with the stories and characters they crave. Whether that experience is online or in-person, streaming video, theatrical, games, merchandise, events and more, it's powered by the anime content we all love. Join our team, and help us shape the future of anime!

Who We Are:
We're a cast of characters working to shine a spotlight on anime. Crunchyroll is an international business focused on creating both online and offline experiences for fans through content (licensed, co-produced, originals, distribution), merchandise, events, gaming, news, and more. Visit our About Us pages for more information about our collection of brands.

Location: Hyderabad, India
The intersection of media and technology is our sweet spot, and we are fortunate to have a global office in Hyderabad, India. This office houses many of our corporate functions and cross-functional teams tasked with creating exceptional experiences for our passionate communities.

About the Team:
The Center for Data and Insights (CDI) is the centralized team of data engineering, BI, analytics, and data science experts, passionate about servicing the organization with certified reports and insights! The mission of the group is to inspire, support, and guide our partners to be data-aware and to build the systems of intelligence to discover insights and act on them.

About the Role:
We are looking for a Director, CDI Operations who will manage partner relationships, ensure project success, and drive satisfaction. You will report to the Senior Director, Data Analytics. You will be a key point of contact for our growing global organization in our efforts towards growth and strategy. Your work also involves identifying opportunities for growth within existing stakeholder relationships.

Responsibilities:
- Stakeholder Relationship Management: Build and maintain engaging, trusting relationships between global team members.
- Project Management: Oversee project execution, ensuring timelines are met, budgets are observed, and project scope is well-defined.
- Communication: Act as the primary point of contact, communicating project progress and updates to stakeholders and internal teams.
- Problem Solving: Identify and resolve challenges that may arise during the project lifecycle, ensuring partner satisfaction.
- Opportunity Identification: Identify and pursue new business opportunities with existing partners, potentially leading to expanded engagements.
- Team Leadership: Guide and develop team members involved in stakeholder projects.
- Partner Onboarding and Education: Ensure new partners are onboarded and understand the value of the service/product.
- Global Business Impact: Lead and influence business principles and how they apply to stakeholder engagements.
- Global Team Management: Be available during US/India time zones to collaborate with stakeholders.

About You:
- 12+ years in partner relationship management.
- 12+ years of project management: overseeing project execution, ensuring timelines are met, budgets are followed, and project scope is well-defined.
- 10+ years within a technical role, including data analytics, data engineering, data science, etc.
- Knowledge of cloud data warehouses like Redshift, Snowflake, Imply, etc.
- Knowledge of visualization tools like Tableau.
- Knowledge of large data sets (terabytes of data / billions of records).
- 5+ years of consulting experience in international environments, with the stature necessary to work as a partner with senior colleagues and clients.
- 5+ years of onsite/offshore management experience.
- Experience breaking down and solving problems through quantitative analysis.
- Knowledge of the entertainment domain or an equivalent B2C industry.
- Bachelor's degree in Business, Management, Data Science, or a related field.

A Day in the Life:
On a daily basis, partner with the CDI stakeholders in a structured manner, both verbally and in writing, including colleagues with different perspectives and seniority levels. Collaborate across time zones using relevant digital productivity and communication tools (e.g., email, Slack, Zoom). Work with offshore and onsite teams, including a 3-4 hour overlap with the US team. Maintain a culture of high-quality output and outstanding customer service.

Why you will love working at Crunchyroll:
In addition to getting to work with fun, passionate and inspired colleagues, you will also enjoy the following benefits and perks:
- Best-in-class medical, dental, and vision private insurance healthcare coverage
- Access to counseling & mental health sessions 24/7 through our Employee Assistance Program (EAP)
- Free premium access to Crunchyroll
- Professional development
- Company-paid parental leave: up to 26 weeks for birthing parents, up to 12 weeks for non-birthing parents
- Hybrid work schedule
- Paid Time Off, Flex Time Off, 5 Yasumi Days, Half-Day Fridays during the summer, and Winter Break

About Our Values:
We want to be everything for someone rather than something for everyone, and we do this by living and modeling our values in all that we do. We value:
- Courage. We believe that when we overcome fear, we enable our best selves.
- Curiosity. We are curious, which is the gateway to empathy, inclusion, and understanding.
- Kaizen. We have a growth mindset committed to constant forward progress.
- Service. We serve our community with humility, enabling joy and belonging for others.

Our commitment to diversity and inclusion:
Our mission of helping people belong reflects our commitment to diversity & inclusion. It's just the way we do business. We are an equal opportunity employer and value diversity at Crunchyroll. Pursuant to applicable law, we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Crunchyroll, LLC is an independently operated joint venture between US-based Sony Pictures Entertainment and Japan's Aniplex, a subsidiary of Sony Music Entertainment (Japan) Inc., both subsidiaries of Tokyo-based Sony Group Corporation.

Questions about Crunchyroll's hiring process? Please check out our Hiring FAQs: https://help.crunchyroll.com/hc/en-us/articles/360040471712-Crunchyroll-Hiring-FAQs

Please refer to our Candidate Privacy Policy for more information about how we process your personal information, and your data protection rights: https://tbcdn.talentbrew.com/company/22978/v1_0/docs/spe-jobs-privacy-policy-update-for-crpa-dec-21-22.pdf

Please beware of recent scams targeting online job seekers. Those applying to our job openings will only be contacted directly from an @crunchyroll.com email account.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Role Overview:
We are seeking a Lead Data Engineer who will take ownership of our data infrastructure, manage a small data team, and oversee the design and implementation of reporting systems. This role is perfect for someone with strong technical skills in data engineering who also has experience leading projects and delivering business-critical dashboards and reports.

Responsibilities:
- Team Leadership: Lead and mentor a team of data engineers and BI developers. Assign tasks, review code, and ensure timely delivery.
- Pipeline Management: Design, build, and maintain scalable ETL/ELT pipelines across various data sources.
- Reporting & BI Oversight: Oversee the development and delivery of operational and executive reports. Ensure data accuracy and alignment with business goals.
- Data Warehousing: Architect and optimize data warehouses (e.g., Snowflake, Redshift, BigQuery) to support analytical workloads and real-time reporting.
- Collaboration: Work closely with business and analytics teams to understand data needs and translate them into technical solutions.
- Governance & Quality: Implement standards for data governance, documentation, and quality.
- Tooling: Evaluate and integrate new tools for data transformation and visualization (e.g., Tableau, Power BI, Looker).

Requirements:
- 6+ years of experience in data engineering, with at least 2 years in a lead role.
- Strong experience in SQL, Python, and ETL tools (e.g., Airflow, dbt).
- Experience with BI/reporting tools like Power BI, Tableau, or Looker.
- Deep understanding of data modeling and warehouse architecture.
- Familiarity with cloud platforms (AWS, GCP, Azure).
- Excellent communication and stakeholder management skills.
- Experience managing or mentoring junior team members.

(ref:hirist.tech)

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Key Responsibilities:
- Lead and mentor a high-performing data pod composed of data engineers, data analysts, and BI developers.
- Design, implement, and maintain ETL pipelines and data workflows to support real-time and batch processing.
- Architect and optimize data warehouses for scale, performance, and security.
- Perform advanced data analysis and modeling to extract insights and support business decisions.
- Lead data science initiatives including predictive modeling, NLP, and statistical analysis (see the sketch after this posting).
- Manage and tune relational and non-relational databases (SQL, NoSQL) for availability and performance.
- Develop Power BI dashboards and reports for stakeholders across departments.
- Ensure data quality, integrity, and compliance with data governance and security standards.
- Work with cross-functional teams (product, marketing, ops) to turn data into strategy.

Qualifications:
- PhD in Data Science, Computer Science, Engineering, Mathematics, or a related field.
- 7+ years of hands-on experience across data engineering, data science, analysis, and database administration.
- Strong experience with ETL tools (e.g., Airflow, Talend, SSIS) and data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Proficient in SQL, Python, and Power BI.
- Familiarity with modern cloud data platforms (AWS/GCP/Azure).
- Strong understanding of data modeling, data governance, and MLOps practices.
- Exceptional ability to translate business needs into scalable data solutions.

(ref:hirist.tech)
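The predictive-modeling/NLP initiatives mentioned above could start as simply as a TF-IDF text classifier; the snippet below uses scikit-learn on four synthetic examples, purely as a sketch.

```python
# Toy sketch of an NLP classification task: TF-IDF features feeding a
# logistic regression. The texts and labels are synthetic examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "refund not received",
    "great product",
    "card charged twice",
    "love the service",
]
labels = [1, 0, 1, 0]  # 1 = complaint, 0 = praise

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["charged but no refund"]))  # expected: [1], a complaint
```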

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description:
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We're committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive.
- Comprehensive health and life insurance and well-being benefits, based on location.
- Pension / retirement benefits.
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The Impact You Will Have In This Role:
The Development family is responsible for creating, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth subject matter expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions. The Software Engineering role specializes in planning, documenting technical requirements, crafting, developing, and testing all software systems and applications for the firm, and works closely with architects, product managers, project management, and end-users in the development and improvement of existing software systems and applications, proposing and recommending solutions that solve complex business problems.

Your Primary Responsibilities:
- Participate in daily code deploys while working on individual or team projects.
- Translate business requirements into software designs and implementations.
- Participate in thorough code reviews with the goal of illustrating quality engineering practices and producing the highest quality code possible.
- Build high-quality, scalable, and performant applications.
- Understand requirements and translate them into specific application and infrastructure-related tasks.
- Design frameworks that promote concepts of isolation, extensibility, and reusability.
- Support the team in handling client expectations and resolving issues urgently.
- Support development teams, testing, troubleshooting, and production support.
- Create applications and construct unit test cases that ensure compliance with functional and non-functional requirements.
- Work with peers to mature ways of working, continuous integration, and continuous delivery.
- Align risk and control processes into day-to-day responsibilities to supervise and mitigate risk; escalate appropriately.

Qualifications:
- Minimum of 4 years of related experience.
- Bachelor's degree preferred, or equivalent experience.

Talents Needed For Success:
- 4+ years' experience in application development and system analysis.
- Expert in Java/JEE and coding standard methodologies.
- Expert knowledge in development concepts.
- Good design and coding skills in Web Services, Spring/Spring Boot, SOAP/REST APIs, and JavaScript frameworks for modern web applications.
- Solid understanding of HTML, CSS, and modern JavaScript.
- Experience with Angular v15+ and/or React.
- Experience integrating with database technologies such as Oracle, PostgreSQL, etc.
- Ability to write quality, self-validating code using unit tests and following TDD.
- Experience with Agile methodology and ability to collaborate with other team members.
- Bachelor's degree in a technical field or equivalent experience.

Nice To Have:
- Experience in developing and using the AWS cloud stack (S3, SQS, Redshift, Lambda, etc.) is a big plus.
- Ability to demonstrate DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, CloudBees, Git, etc.

We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen:
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role:
As a Sr. Associate BI Engineer, you will support the development and delivery of data-driven solutions that enable business insights and operational efficiency. You will work closely with senior BI engineers, analysts, and stakeholders to build dashboards, analyze data, and contribute to the design of scalable reporting systems. This is an ideal role for early-career professionals looking to grow their technical and analytical skills in a collaborative environment.

Roles & Responsibilities:
- Design and maintain dashboards and reports using tools like Power BI, Tableau, or Cognos.
- Perform data analysis to identify trends and support business decisions.
- Gather BI requirements and translate them into technical specifications.
- Support data validation, testing, and documentation efforts.
- Apply best practices in data modeling, visualization, and BI development.
- Participate in Agile ceremonies and contribute to sprint planning and backlog grooming.

Basic Qualifications and Experience:
- Bachelor's or Master's degree in Computer Science, IT, or related field experience.
- At least 5 years of relevant experience.

Functional Skills:
- Exposure to data visualization tools such as Power BI, Tableau, or QuickSight.
- Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis.
- Familiarity with data modeling, warehousing, and ETL pipelines.
- Understanding of data structures and reporting concepts.
- Strong analytical and problem-solving skills.

Good-to-Have Skills:
- Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2).
- Understanding of Agile methodologies (Scrum, SAFe).
- Knowledge of DevOps, CI/CD practices.
- Familiarity with scientific or healthcare data domains.

Soft Skills:
- Strong verbal and written communication skills.
- Willingness to learn and take initiative.
- Ability to work effectively in a team environment.
- Attention to detail and commitment to quality.
- Ability to manage time and prioritize tasks effectively.

Shift Information:
This position may require working a second or third shift based on business needs. Candidates must be willing and able to work during evening or night shifts if required.

EQUAL OPPORTUNITY STATEMENT:
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen:
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role:
As a BI Analyst in the Business Intelligence, Reporting, and Sensing team, you will play a critical role in transforming data into actionable insights that drive strategic decisions. You will collaborate with cross-functional teams to gather requirements, design analytical solutions, and deliver high-quality dashboards and reports. This role blends technical expertise with business acumen and requires strong communication and problem-solving skills.

Roles & Responsibilities:
- Collaborate with System Architects and Product Managers to manage business analysis activities, ensuring alignment with engineering and product goals.
- Support design, development, and maintenance of interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos).
- Analyze datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
- Collaborate with stakeholders across departments to understand data and reporting needs.
- Translate business requirements into technical specifications and analytical solutions.
- Work with Data Engineers to ensure data models and pipelines support accurate and reliable reporting.
- Contribute to data quality and governance initiatives.
- Document business processes, use cases, and test plans to support development and QA efforts.
- Participate in Agile ceremonies and contribute to backlog refinement and sprint planning.

Basic Qualifications and Experience:
- Bachelor's or Master's degree in Computer Science, IT, or related field experience, with at least 5 years of experience as a Business Analyst or in relevant areas; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience.

Functional Skills:
- Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
- Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis.
- Familiarity with data modeling, warehousing, and ETL pipelines.
- Experience writing user stories and acceptance criteria in Agile tools like JIRA.
- Strong analytical and problem-solving skills.

Good-to-Have Skills:
- Experience with AWS services (e.g., Redshift, S3, EC2).
- Understanding of Agile methodologies (Scrum, SAFe).
- Knowledge of DevOps, CI/CD practices.
- Familiarity with scientific or healthcare data domains.

Professional Certifications:
- AWS Developer certification (preferred).
- SAFe for Teams certification (preferred).

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

Shift Information:
This position may require working a second or third shift based on business needs. Candidates must be willing and able to work during evening or night shifts if required.

EQUAL OPPORTUNITY STATEMENT:
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

22 - 37 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Inviting applications for the role of Senior Principal Consultant - Data Engineer, AWS!

Locations: Bangalore, Hyderabad, Kolkata

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka (see the streaming sketch after this posting).
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using big data technologies like Apache Hadoop and Apache Spark, with appropriate cloud-based services like Amazon AWS.
- Build data pipelines by building ETL processes (Extract-Transform-Load).
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyze business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyze requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert the business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems.
- Understand current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
- Coordinate with release management and other supporting teams to deploy changes in the production environment.

Qualifications we seek in you!

Minimum Qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering & Cloud certifications, Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.
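The Spark-plus-Kafka streaming requirement above could be sketched as follows with Structured Streaming; the broker, topic, schema, and S3 paths are placeholders, and the Spark-Kafka connector package must be on the classpath.

```python
# Hedged sketch: Spark Structured Streaming reads events from Kafka,
# parses JSON payloads, and appends them to S3 as Parquet. All names
# below (broker, topic, schema, paths) are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payments")
    .load()
    # Kafka delivers bytes; cast and parse the JSON value column.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/payments/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/payments/")
    .start()
)
query.awaitTermination()
```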

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Description
Want to join Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites by building solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
Design and develop highly available dashboards and metrics using SQL and Excel/Tableau.
Execute high-priority (i.e., cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers.
Work closely with internal stakeholders such as business teams, engineering teams, and partner teams, and align them with respect to your focus area.
Create and maintain comprehensive business documentation, including user stories, acceptance criteria, and process flows, that helps the BIE understand the context for developing ETL processes and visualization solutions.
Perform user acceptance testing and business validation of delivered dashboards and reports, ensuring that BIE-created solutions meet actual operational needs and can be effectively utilized by site managers and operations teams.
Monitor business performance metrics and operational KPIs to proactively identify emerging analytical requirements, working with BIEs to rapidly develop solutions that address real-time operational challenges in the dynamic AI-enhanced fulfillment environment.

About The Team
The Global Operations – Artificial Intelligence (GO-AI) team remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. The team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It operates multiple programs, including Nike IDS, Proteus, Sparrow and other new initiatives, in partnership with global technology and operations teams.
Basic Qualifications
Experience defining requirements and using data and metrics to draw business insights
Knowledge of SQL
Knowledge of data visualization tools such as QuickSight, Tableau, Power BI or other BI packages
Knowledge of Python, VBA, Macros, Selenium scripts
1+ year of experience working in an Analytics / Business Intelligence environment, with prior experience in the design and execution of analytical projects

Preferred Qualifications
Experience in using AI tools
Experience with Amazon Redshift and other AWS technologies for large datasets
Analytical mindset, with the ability to see the big picture and influence others
Detail-oriented, with an aptitude for solving unstructured problems. The role requires the ability to extract data from various sources and to design, construct, and execute complex analyses that produce the data and reports needed to solve the business problem
Good oral, written and presentation skills, combined with the ability to take part in group discussions and explain complex solutions
Ability to apply analytical, computer, statistical and quantitative problem-solving skills
Ability to work effectively in a multi-task, high-volume environment
Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A3027310
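As a concrete illustration of the dashboard-metric work this role describes, here is a minimal, self-contained Python sketch: a SQL aggregate of the sort a QuickSight or Tableau dashboard would sit on top of. The exception_events table and its columns are invented for the example, not Amazon's schema; SQLite stands in for a warehouse so the snippet runs anywhere.

```python
# A minimal sketch, assuming a hypothetical exception-handling table:
# compute a daily resolution-rate KPI per site with SQL from Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE exception_events (
        event_date TEXT,     -- date the exception was handled
        site       TEXT,     -- fulfillment-center site code (hypothetical)
        resolved   INTEGER   -- 1 if resolved within SLA, else 0
    );
    INSERT INTO exception_events VALUES
        ('2024-05-01', 'HYD1', 1),
        ('2024-05-01', 'HYD1', 0),
        ('2024-05-01', 'HYD2', 1),
        ('2024-05-02', 'HYD1', 1);
""")

# Daily resolution rate per site -- the kind of aggregate a BI dashboard
# would refresh against on a schedule.
query = """
    SELECT event_date,
           site,
           ROUND(100.0 * SUM(resolved) / COUNT(*), 1) AS resolution_rate_pct
    FROM exception_events
    GROUP BY event_date, site
    ORDER BY event_date, site;
"""
for row in conn.execute(query):
    print(row)
```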

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Description
Want to join Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites by building solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
Design and develop highly available dashboards and metrics using SQL and Excel/Tableau.
Execute high-priority (i.e., cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers.
Work closely with internal stakeholders such as business teams, engineering teams, and partner teams, and align them with respect to your focus area.
Create and maintain comprehensive business documentation, including user stories, acceptance criteria, and process flows, that helps the BIE understand the context for developing ETL processes and visualization solutions.
Perform user acceptance testing and business validation of delivered dashboards and reports, ensuring that BIE-created solutions meet actual operational needs and can be effectively utilized by site managers and operations teams.
Monitor business performance metrics and operational KPIs to proactively identify emerging analytical requirements, working with BIEs to rapidly develop solutions that address real-time operational challenges in the dynamic AI-enhanced fulfillment environment.

About The Team
The Global Operations – Artificial Intelligence (GO-AI) team remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. The team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It operates multiple programs, including Nike IDS, Proteus, Sparrow and other new initiatives, in partnership with global technology and operations teams.
Basic Qualifications
Experience defining requirements and using data and metrics to draw business insights
Knowledge of SQL
Knowledge of data visualization tools such as QuickSight, Tableau, Power BI or other BI packages
Knowledge of Python, VBA, Macros, Selenium scripts
1+ year of experience working in an Analytics / Business Intelligence environment, with prior experience in the design and execution of analytical projects

Preferred Qualifications
Experience in using AI tools
Experience with Amazon Redshift and other AWS technologies for large datasets
Analytical mindset, with the ability to see the big picture and influence others
Detail-oriented, with an aptitude for solving unstructured problems. The role requires the ability to extract data from various sources and to design, construct, and execute complex analyses that produce the data and reports needed to solve the business problem
Good oral, written and presentation skills, combined with the ability to take part in group discussions and explain complex solutions
Ability to apply analytical, computer, statistical and quantitative problem-solving skills
Ability to work effectively in a multi-task, high-volume environment
Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A3027313

Posted 3 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Informatica IDMC Developer
Skills: Informatica Intelligent Data Management Cloud (IDMC/IICS), SQL, AWS/Azure/GCP, CI/CD pipelines, Snowflake, Redshift, or BigQuery
Experience Required: 5 - 8 Years
Job Location: Greater Noida only
Send your CV to Gaurav.2.Kumar@coforge.com

We at Coforge are hiring an Informatica IDMC Developer with the following skillset:

Key Responsibilities:
Design, develop, and maintain robust ETL pipelines using Informatica IDMC (IICS).
Collaborate with data architects, analysts, and business stakeholders to gather and understand data requirements.
Integrate data from diverse sources including databases, APIs, and flat files.
Optimize data workflows for performance, scalability, and reliability.
Monitor and troubleshoot ETL jobs and resolve data quality issues.
Implement data governance and security best practices.
Maintain comprehensive documentation of data flows, transformations, and architecture.
Participate in code reviews and contribute to continuous improvement initiatives.

Required Skills & Qualifications:
Strong hands-on experience with Informatica IDMC (IICS) and cloud-based ETL tools.
Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, and PostgreSQL.
Experience working with cloud platforms like AWS, Azure, or GCP.
Familiarity with data warehousing concepts and tools such as Snowflake, Redshift, or BigQuery.
Excellent problem-solving abilities and strong communication skills.

Preferred Qualifications:
Experience with CI/CD pipelines and version control systems.
Knowledge of data modeling and metadata management.
Certification in Informatica or cloud platforms is a plus.
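For readers new to the ETL work this posting centers on: Informatica IDMC expresses extract-transform-load logic as mappings in its cloud designer rather than as code, so the sketch below is only a plain-Python analogue of the same three steps. All names (the orders data, the dim_orders target) are invented for illustration, and SQLite stands in for a warehouse such as Snowflake or Redshift so it runs anywhere.

```python
# A minimal extract-transform-load sketch, assuming a hypothetical orders
# feed with dirty source formatting. An IDMC mapping would express the same
# steps declaratively; this analogue only illustrates the shape of the work.
import sqlite3
import pandas as pd

# Extract: in practice a database, API, or flat file; inlined here.
orders = pd.DataFrame({
    "order_id": [101, 102, 103],
    "amount":   ["1,200.50", "89.99", "450.00"],  # strings with thousands separators
    "country":  ["IN", "in", "US"],               # inconsistent casing
})

# Transform: clean types and normalize values, as a mapping would.
orders["amount"] = orders["amount"].str.replace(",", "").astype(float)
orders["country"] = orders["country"].str.upper()

# Load: write to a target table in the warehouse (SQLite here).
conn = sqlite3.connect(":memory:")
orders.to_sql("dim_orders", conn, index=False)
print(conn.execute("SELECT * FROM dim_orders").fetchall())
```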

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Delhi, India

On-site

What do you need to know about us?
M+C Saatchi Performance is an award-winning global digital media agency, connecting brands to people. We deliver business growth for our clients through effective, measurable, and evolving digital media strategies.

Position Title: Analyst - Reporting & QA
Department: Reporting & QA
Location: New Delhi / Hybrid options

About the Role:
We are looking for a highly skilled Analyst - Reporting & QA with a deep understanding of digital and mobile media to join our Reporting and QA team. This role focuses on enabling our clients to meet their media goals by ensuring data accuracy and delivering actionable insights into media performance through our reporting tools. The ideal candidate will have strong technical skills, be detail-oriented, and have experience in digital/mobile media attribution and reporting.

Core Responsibilities:
ETL & Data Automation: Use Matillion to streamline data processes, ensuring efficient and reliable data integration across all reporting systems.
Data Quality Assurance: Verify and validate data accuracy within Power BI dashboards, proactively identifying and addressing discrepancies to maintain high data integrity.
Dashboard Development: Build, maintain, and optimize Power BI dashboards to deliver real-time insights that help clients understand the performance of their digital and mobile media campaigns.
Media Performance Insights: Collaborate closely with media teams to interpret data, uncover trends, and provide actionable insights that support clients in optimizing their media investments.
Industry Expertise: Apply in-depth knowledge of digital and mobile media, attribution models, and reporting frameworks to deliver valuable perspectives on media performance.
Tools & Platforms Expertise: Utilize tools such as GA4, platform reporting systems, first-party data analytics, and mobile measurement partners (MMPs) to support comprehensive media insights for clients.

Qualifications and Experience:
Education: Bachelor's degree in Statistics, Data Science, Computer Science, Marketing, or a related field.
Experience: 4-6 years in a similar role, with substantial exposure to data analysis, reporting, and the digital/mobile media landscape.
Technical Skills: Proficiency in ETL tools (preferably Matillion), Power BI, and data quality control.
Industry Knowledge: Strong understanding of digital and mobile media, with familiarity in attribution, reporting practices, and performance metrics.
Analytical Skills: Skilled in interpreting complex data, generating actionable insights, and presenting findings effectively to non-technical stakeholders.
Communication: Excellent communicator with a proven ability to collaborate effectively across cross-functional teams and with clients.
Tools & Platforms: Proficiency in GA4, platform reporting, first-party data analysis, and mobile measurement partners (MMPs).

Desired Skills:
Background in a media agency environment.
Experience with cloud-based data platforms (e.g., AWS, Redshift) preferred.
Experience with Power BI is a must.
Strong collaboration skills and the ability to work independently.

What can you look forward to?
Being part of the world's largest independent advertising holding group.
Family Health Insurance Coverage.
Flexible Working Hours.
Regular events, including Reece Lunch & indoor games.
Employee Training/Learning Programs.

About M+C Saatchi Performance
M+C Saatchi Performance has pledged its commitment to creating a company that values difference, with an inclusive culture. As part of this, M+C Saatchi Performance continues to be an Equal Opportunity Employer which does not and shall not discriminate, celebrates diversity, and bases all hiring and promotion decisions solely on merit, without regard for any personal characteristics. All employee information is kept confidential according to the General Data Protection Regulation (GDPR).
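As a concrete illustration of the data-quality-assurance responsibility above, here is a minimal sketch of one common reconciliation pattern: comparing spend reported by an ad platform against the totals in a dashboard's source table and flagging discrepancies. The column names, the campaign values, and the 1% tolerance are all assumptions made for the example, not agency standards.

```python
# A minimal reconciliation sketch, assuming hypothetical platform and
# dashboard extracts with matching campaign keys.
import pandas as pd

platform = pd.DataFrame({
    "campaign": ["brand_a", "brand_b", "brand_c"],
    "spend":    [1000.0, 2500.0, 400.0],
})
dashboard = pd.DataFrame({
    "campaign": ["brand_a", "brand_b", "brand_c"],
    "spend":    [1000.0, 2450.0, 400.0],   # brand_b disagrees with the platform
})

merged = platform.merge(dashboard, on="campaign",
                        suffixes=("_platform", "_dashboard"))
merged["pct_diff"] = (
    (merged["spend_dashboard"] - merged["spend_platform"]).abs()
    / merged["spend_platform"] * 100
)
# Flag anything off by more than 1% (an illustrative tolerance).
discrepancies = merged[merged["pct_diff"] > 1.0]
print(discrepancies)
```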

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Delhi, India

On-site

What do you need to know about us?
M+C Saatchi Performance is an award-winning global digital media agency, connecting brands to people. We deliver business growth for our clients through effective, measurable, and evolving digital media strategies.

Position Title: Analyst - Reporting & QA
Department: Reporting & QA
Location: New Delhi - Hybrid

About the Role:
We are looking for a highly skilled Analyst - Reporting & QA with a deep understanding of digital and mobile media to join our Reporting and QA team. This role focuses on enabling our clients to meet their media goals by ensuring data accuracy and delivering actionable insights into media performance through our reporting tools. The ideal candidate will have strong technical skills, be detail-oriented, and have experience in digital/mobile media attribution and reporting.

Core Responsibilities:
ETL & Data Automation: Use Matillion to streamline data processes, ensuring efficient and reliable data integration across all reporting systems.
Data Quality Assurance: Verify and validate data accuracy within Power BI dashboards, proactively identifying and addressing discrepancies to maintain high data integrity.
Dashboard Development: Build, maintain, and optimize Power BI dashboards to deliver real-time insights that help clients understand the performance of their digital and mobile media campaigns.
Media Performance Insights: Collaborate closely with media teams to interpret data, uncover trends, and provide actionable insights that support clients in optimizing their media investments.
Industry Expertise: Apply in-depth knowledge of digital and mobile media, attribution models, and reporting frameworks to deliver valuable perspectives on media performance.
Tools & Platforms Expertise: Utilize tools such as GA4, platform reporting systems, first-party data analytics, and mobile measurement partners (MMPs) to support comprehensive media insights for clients.

Qualifications and Experience:
Education: Bachelor's degree in Statistics, Data Science, Computer Science, Marketing, or a related field.
Experience: 4-6 years in a similar role, with substantial exposure to data analysis, reporting, and the digital/mobile media landscape.
Technical Skills: Proficiency in ETL tools (preferably Matillion), Power BI, and data quality control.
Industry Knowledge: Strong understanding of digital and mobile media, with familiarity in attribution, reporting practices, and performance metrics.
Analytical Skills: Skilled in interpreting complex data, generating actionable insights, and presenting findings effectively to non-technical stakeholders.
Communication: Excellent communicator with a proven ability to collaborate effectively across cross-functional teams and with clients.
Tools & Platforms: Proficiency in GA4, platform reporting, first-party data analysis, and mobile measurement partners (MMPs).

Desired Skills:
Background in a media agency environment.
Experience with cloud-based data platforms (e.g., AWS, Redshift) preferred.
Experience with Power BI is a must.
Strong collaboration skills and the ability to work independently.

What can you look forward to?
Being part of the world's largest independent advertising holding group.
Family Health Insurance Coverage.
Flexible Working Hours.
Regular events, including Reece Lunch & indoor games.
Employee Training/Learning Programs.

About M+C Saatchi Performance
M+C Saatchi Performance has pledged its commitment to creating a company that values difference, with an inclusive culture. As part of this, M+C Saatchi Performance continues to be an Equal Opportunity Employer which does not and shall not discriminate, celebrates diversity, and bases all hiring and promotion decisions solely on merit, without regard for any personal characteristics. All employee information is kept confidential according to the General Data Protection Regulation (GDPR).

Posted 3 weeks ago

Apply

6.0 - 11.0 years

12 - 22 Lacs

Bengaluru

Work from Office

Our engineering team is looking for a Data Engineer who is highly proficient in Python, has a very good understanding of AWS cloud computing and ETL pipelines, and demonstrates proficiency with SQL and relational database concepts. In this role you will be a mid- to senior-level individual contributor guiding our migration efforts, serving as a senior data engineer working closely with the data architects to evaluate best-fit solutions and processes for our team. You will work with the rest of the team as we move away from legacy tech and introduce new tools and ETL pipeline solutions, and you will collaborate with subject matter experts, data architects, informaticists and data scientists to evolve our current cloud-based ETL to the next generation.

Responsibilities
Independently prototype and develop data solutions of high complexity to meet the needs of the organization and business customers.
Design proof-of-concept solutions utilizing an advanced understanding of multiple coding languages to meet technical and business requirements, with an ability to perform iterative solution testing to ensure specifications are met.
Design and develop data solutions that enable effective self-service data consumption, and describe their value to the customer.
Collaborate with stakeholders in defining metrics that are impactful to the business, and prioritize efforts based on customer value.
Apply an in-depth understanding of Agile techniques; set expectations for deliverables of high complexity; assist in the creation of roadmaps for data solutions; turn vague ideas or problems into data product solutions.
Influence strategic thinking across the team and the broader organization.
Maintain proof-of-concepts and prototype data solutions, and manage any assessment of their viability and scalability, with your own team or in partnership with IT.
Working with IT, assist in building robust systems focused on long-term and ongoing maintenance and support, and ensure data solutions include the deliverables required to achieve high-quality data.
Display a strong understanding of complex multi-tier, multi-platform systems, and apply principles of metadata, lineage, business definitions, compliance, and data security to project work.
Apply an in-depth understanding of Business Intelligence tools, including visualization and user experience techniques; work with IT to help scale prototypes.
Demonstrate a comprehensive understanding of new technologies as needed to progress initiatives.

Requirements
Expertise in Python programming, with demonstrated real-world experience building out data tools in a Python environment.
Expertise in AWS services, with demonstrated real-world experience building out data tools in an AWS environment.
Bachelor's degree in Computer Science, Computer Engineering, or a related discipline preferred; a Master's in the same or a related discipline is strongly preferred.
3+ years' experience in coding for data management, data warehousing, or other data environments, including, but not limited to, Python and Spark. Experience with SAS is preferred.
3+ years' experience as a developer working in an AWS cloud computing environment.
3+ years' experience using Git or Bitbucket.
Experience with Redshift, RDS, and DynamoDB is preferred.
Strong written and oral communication skills are required.
Experience in the healthcare industry with healthcare data analytics products.
Experience with healthcare vocabulary and data standards (OMOP, FHIR) is a plus.
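To make the AWS/ETL side of this role concrete, here is a sketch of one common pattern behind such pipelines: staging extract files in S3 and bulk-loading them into Redshift with COPY. The bucket, cluster host, table, and IAM role below are hypothetical placeholders, and running this would require real AWS credentials and a reachable cluster; it is an illustrative sketch, not this team's implementation.

```python
# A minimal S3-to-Redshift load sketch, assuming staged CSV extracts and a
# hypothetical cluster. All resource names are placeholders.
import boto3
import psycopg2

s3 = boto3.client("s3")
# Confirm the staged extract exists before asking Redshift to load it.
listing = s3.list_objects_v2(Bucket="example-data-lake",
                             Prefix="staging/orders/")
assert listing.get("KeyCount", 0) > 0, "no staged files to load"

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)
with conn, conn.cursor() as cur:
    # COPY is the idiomatic bulk-load path into Redshift; it reads the
    # staged files in parallel across the cluster's slices.
    cur.execute("""
        COPY analytics.orders
        FROM 's3://example-data-lake/staging/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
        FORMAT AS CSV IGNOREHEADER 1;
    """)
```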

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

We are looking for a Test Engineer who will become part of our team building and testing Creditsafe data. You will work closely with the database and data engineering teams to build systems facilitating the extraction and transformation of Creditsafe data. Based on the test strategy and approach, you will develop, enhance and execute tests that add value to Creditsafe data.

You will act as a primary source of guidance to Junior Test Engineers and Test Engineers in all areas of data quality, and will contribute to the team using data quality best practices and techniques. You can confidently communicate test results to your team members and stakeholders using evidence and reports, and will act as a mentor and coach to the less experienced members of the test team, promoting and coaching leading practices in data test management, design, and implementation.

You will be part of an Agile team and will contribute effectively to its ceremonies, acting as the quality specialist within that team. You are an influencer and will provide leadership in defining and implementing agreed standards, actively promoting these within your team and the wider development community. The ideal candidate has extensive experience in mentorship, leads by example, and is able to communicate values consistent with the Creditsafe philosophy of engagement. You have critical thinking skills and can diplomatically communicate within and outside your areas of responsibility, challenging assumptions where required.

Required Skills
Proven working experience as a data test engineer, business data analyst or ETL tester.
Technical expertise regarding data models, database design and development, data mining and segmentation techniques.
Strong knowledge of, and experience with, SQL databases.
Hands-on experience of best engineering practices (handling and logging errors, system monitoring and building human-fault-tolerant applications).
Knowledge of statistics and experience using statistical packages for analysing datasets (Excel, SPSS, SAS, etc.) is an advantage.
Comfortable working with relational databases such as Redshift, Oracle, PostgreSQL, MySQL, and MariaDB (PostgreSQL preferred).
Strong analytical skills, with the ability to collect, organise, analyse, and disseminate significant amounts of information with attention to detail and accuracy.
Adept at queries, report writing and presenting findings.
BS in Mathematics, Economics, Computer Science, Information Management or Statistics is desirable but not essential.
A good understanding of cloud technology, preferably AWS and/or Azure DevOps.
A practical understanding of programming: JavaScript, Python.
Excellent communication skills.
Practical experience of testing in an Agile approach.

Desirable Skills
An understanding of version control systems.
Practical experience of conducting code reviews.
Practical experience of pair testing and pair programming.

Primary Responsibilities
Reports to the Engineering Lead.
Work as part of the engineering team in data acquisition.
Designing and implementing processes and tools to monitor and improve the quality of Creditsafe's data.
Developing and executing test plans to verify the accuracy and reliability of data.
Working with data analysts and other stakeholders to establish and maintain data governance policies.
Identifying and resolving issues with the data, such as errors, inconsistencies, or duplication.
Collaborating with other teams, such as data analysts and data scientists, to ensure the quality of data used for various projects and initiatives.
Providing training and guidance to other team members on data quality best practices and techniques.
Monitoring and reporting on key data quality metrics, such as data completeness and accuracy.
Continuously improving data quality processes and tools based on feedback and analysis.
Work closely with the Agile team to promote a whole-team approach to quality.
Document approaches and processes that improve the quality effort, for use by team members and the wider test function.
Apply strong practical knowledge of software testing techniques, advising on and selecting the correct technique dependent on the problem at hand.
Conduct analysis of the team's test approach, taking a proactive role in the formulation of the relevant quality criteria in line with the team goals.
Work with team members to define standards and processes applicable to their area of responsibility.
Monitor progress of team deliverables, raising quality concerns in a timely, effective manner.
Gain a sufficient understanding of the system architecture to inform your test approach and that of the test engineers.
Create and maintain concise and accurate defect reports in line with the established defect process.

Behavioural skills
Teamwork – Leads by example in the areas of cooperation, collaboration and partnerships.
Quality Improvement – Takes the initiative to deliver improvements and results of value.
Problem Solving – Identifies and prioritises problems and works to deliver workable solutions.
Feedback – Seeks feedback from team members and provides feedback to them; has an appreciation of others' viewpoints, frequently soliciting opinions differing from their own; promotes an inclusive, merit-based approach to differing opinions.
Autonomy – Is able to work independently within the constraints of the Agile team, determines when issues should be escalated, and takes responsibility for (and provides the rationale behind) their own decisions.
Influence – Interacts with and influences colleagues in a positive manner, undertakes supervisory activities, and makes decisions which impact and optimise the work assigned to individuals or projects; aspires to be regarded as the SME for quality-related issues.
Complexity – Is able to grasp complex concepts, form an understanding, and explain them to other team members; can articulate complex concepts to stakeholders in a non-technical manner; performs a range of work, sometimes complex and non-routine, in a variety of environments, applying a methodical approach to issue definition and resolution.

Business skills
Demonstrates an analytical and systematic approach to issue resolution, acting as the primary contact within the team.
Takes the initiative in identifying and negotiating appropriate personal development opportunities with less experienced test team members.
Demonstrates effective communication skills and can vary message presentation dependent on the level of stakeholder.
Plans, schedules and monitors own work (and that of others) competently within limited deadlines and according to relevant legislation, standards and procedures.
Appreciates the wider business context, and how their own role relates to other roles and to the business objectives of Creditsafe.

Company Benefits:
Competitive Salary
Work from Home
Pension
Medical Insurance
Cab facility for Women
Dedicated Gaming Area
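As a self-contained illustration of the data-quality checks this role centers on, here is a minimal sketch expressing completeness and duplicate checks as pytest tests. The company_records schema is invented for the example (not Creditsafe's), SQLite stands in for the real databases, and the seeded NULL country deliberately makes the completeness test fail, demonstrating the check catching a defect.

```python
# A minimal data-quality test sketch, assuming a hypothetical
# company_records table. Run with: pytest this_file.py
import sqlite3
import pytest

@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE company_records (
            reg_number TEXT, name TEXT, country TEXT
        );
        INSERT INTO company_records VALUES
            ('GB001', 'Acme Ltd', 'GB'),
            ('GB002', 'Beta Plc', 'GB'),
            ('NO001', 'Gamma AS', NULL);  -- seeded completeness defect
    """)
    return c

def test_no_duplicate_registration_numbers(conn):
    # Duplicate check: no registration number should appear twice.
    dupes = conn.execute("""
        SELECT reg_number FROM company_records
        GROUP BY reg_number HAVING COUNT(*) > 1
    """).fetchall()
    assert dupes == [], f"duplicate registrations: {dupes}"

def test_country_is_complete(conn):
    # Completeness check: fails here because of the seeded NULL above.
    missing = conn.execute(
        "SELECT COUNT(*) FROM company_records WHERE country IS NULL"
    ).fetchone()[0]
    assert missing == 0, f"{missing} rows missing country"
```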

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Title: Database Engineer X 8 Positions
Location: Hyderabad, India
Salary: Market Rate/Negotiable

About us
Creditsafe is the most used business data provider in the world, reducing risk and maximizing opportunities for our 110,000 business customers. Our journey began in Oslo, Norway in 1997, where we had a dream of using the then-revolutionary internet to deliver instant-access company credit reports to small and medium-sized businesses. Creditsafe realized this dream and changed the market for the better for businesses of all sizes. From there, we opened 15 more offices throughout Europe, the USA and Asia. We provide data on more than 300 million companies and provide customer notifications for billions of changes annually.

We are a high-growth company offering the freedom and flexibility of a start-up type culture, thanks to continuous innovation and new product development, coupled with the stability of being a profitable and growing company! With such a large customer base and breadth of data and analytics technology, you will have real opportunities to help companies survive and thrive in challenging times by reducing business risk and choosing trustworthy customers and suppliers.

Summary:
This is your opportunity to develop your career with an exciting, fast-paced and rapidly expanding business, one of the leading providers of Business Intelligence worldwide. As a Database Engineer with excellent database development skills, you will be responsible for developing and maintaining the databases and scripts that power the company's products and websites, handling large data sets and serving more than 20 million hits per day. You will work with your team to deliver work on time, in line with the business requirements, and to a high level of quality.

Primary Responsibilities:
· 5+ years' solid commercial experience of Oracle development under a 10g or 11g environment.
· Advanced PL/SQL knowledge required.
· ETL skills – Pentaho would be beneficial.
· Any wider DB experience would be desirable, e.g., Redshift, Aurora DB, DynamoDB, MariaDB, MongoDB etc.
· Cloud/AWS experience and an interest in learning new technologies.
· Experience in tuning Oracle queries in large databases.
· Good experience in loading and extracting large data sets.
· Experience of working with an Oracle database under a bespoke web development environment.
· Analytical and critical thinking skills; agile problem-solving abilities.
· Detail-oriented, self-motivated, able to work independently with little or no supervision, and committed to the highest standards of quality for the entire release process.
· Excellent written and verbal communication skills.
· Attention to detail.
· Ability to work in a fast-paced, changing environment.
· Ability to thrive in a deadline-driven, stressful project environment.
· 3+ years of software development experience.

Qualifications and Experience
· Degree in Computer Science or similar.
· Experience with loading data through SSIS.
· Experience working on financial and business intelligence projects or in big data environments.
· A desire to learn new skills and branch into development using a wide range of alternative technologies.
Skills, Knowledge and Abilities
· Write code for new development requirements, as well as provide bug fixing, support and maintenance of existing code.
· Test your code to ensure it functions as per the business requirements, considering the impact of your code on other areas of the solution.
· Provide expert advice on performance tuning within Oracle.
· Perform large-scale imports and extracts of data.
· Assist the business in the collection and documentation of users' requirements where needed; provide estimates and work plans.
· Create and maintain technical documentation.
· Follow all company procedures/standards/processes.
· Contribute to architectural design and development, making technically sound development recommendations.
· Provide support to other staff in the department and act as a mentor to less experienced staff, including through code reviews.
· Work as a team player in an agile environment.
· Build release scripts and plans to facilitate the deployment of your code to testing and production environments.
· Take ownership of any issues that occur within your area to ensure an appropriate solution is found.
· Assess opportunities for application and process improvement and share them with team members and/or affected parties.

Company Benefits:
Competitive Salary
Work from Home
Pension
Medical Insurance
Cab facility for Women
Dedicated Gaming Area
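As an illustration of the "loading and extracting large data sets" skill this posting asks for, here is a minimal sketch of the batched-loading technique commonly used for such imports: insert rows in fixed-size chunks with executemany and commit per chunk, so a failure never rolls back more than one batch. It is shown with SQLite so it runs anywhere; Oracle's Python drivers (cx_Oracle / python-oracledb) expose the same executemany pattern, and the staging_rows table and batch size are assumptions made for the example.

```python
# A minimal batched bulk-load sketch, assuming a hypothetical staging table.
import sqlite3

BATCH_SIZE = 10_000  # tuning knob: larger batches mean fewer commits

def load_in_batches(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS staging_rows (id INTEGER, payload TEXT)"
    )
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            conn.executemany("INSERT INTO staging_rows VALUES (?, ?)", batch)
            conn.commit()  # commit per chunk bounds the cost of a failure
            batch.clear()
    if batch:  # flush the final partial batch
        conn.executemany("INSERT INTO staging_rows VALUES (?, ?)", batch)
        conn.commit()

conn = sqlite3.connect(":memory:")
# A generator keeps memory flat even for very large source extracts.
load_in_batches(conn, ((i, f"row-{i}") for i in range(25_000)))
print(conn.execute("SELECT COUNT(*) FROM staging_rows").fetchone())
```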

Posted 3 weeks ago

Apply