
3597 Redshift Jobs - Page 25

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Key Responsibilities:
- Build visualizations using tools like Tableau.
- Analyse complex data sets to identify trends, patterns, and insights.
- Write efficient and optimized SQL queries for data extraction and manipulation (see the Redshift query sketch below).
- Create interactive and informative visualizations to present data insights.
- Collaborate with cross-functional teams to understand business requirements and provide analytical solutions.
- Maintain and optimize existing data pipelines and workflows.
- Communicate complex data findings clearly and effectively to stakeholders.

Required Skills:
- 5+ years of relevant experience in data analysis.
- Hands-on experience with SQL programming.
- Knowledge of AWS Redshift or similar database technologies.

Preferred Skills (not mandatory):
- Familiarity with R/Python.

Other Requirements:
- Excellent communication skills.
- Strong business analysis skills.

Qualifications
Graduate/Post Graduate
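For context on the kind of optimized Redshift SQL this role calls for, here is a minimal, hypothetical sketch. The cluster endpoint, credentials, and the orders table with its region, order_date, and amount columns are all invented for illustration; psycopg2 works here because Redshift speaks the PostgreSQL wire protocol.

```python
# Hypothetical example: a monthly revenue trend query against Redshift.
# Endpoint, credentials, and the "orders" table are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="...",
)

# Aggregating before applying the window function keeps the windowed
# set small, which is usually cheaper than windowing over raw rows.
TREND_SQL = """
    SELECT region,
           DATE_TRUNC('month', order_date) AS month,
           SUM(amount) AS revenue,
           AVG(SUM(amount)) OVER (
               PARTITION BY region
               ORDER BY DATE_TRUNC('month', order_date)
               ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
           ) AS rolling_3mo_avg
    FROM orders
    GROUP BY region, DATE_TRUNC('month', order_date)
    ORDER BY region, month;
"""

with conn.cursor() as cur:
    cur.execute(TREND_SQL)
    for region, month, revenue, rolling_avg in cur.fetchall():
        print(region, month, revenue, rolling_avg)
conn.close()
```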

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities
- Design and develop high-volume data engineering solutions for mission-critical systems with quality.
- Make enhancements to various applications that meet business and auditing requirements.
- Research and evaluate alternative solutions and make recommendations on improving the product to meet business and information-risk requirements.
- Evaluate service-level issues and suggested enhancements to diagnose and address underlying system problems and inefficiencies.
- Participate in full development lifecycle activities for the product (coding, testing, release activities).
- Support release activities on weekends as required, and support any application issues reported during weekends.
- Coordinate day-to-day activities for multiple projects with onshore and offshore team members.
- Ensure the availability of the platform in lower environments.

Required Qualifications
- 3-4 years of overall IT experience, including hands-on experience in Big Data technologies.
- Mandatory: hands-on experience in Python and PySpark, building PySpark applications with Spark DataFrames in Python using Jupyter Notebook and PyCharm (IDE), as sketched below.
- Experience optimizing Spark jobs that process huge volumes of data.
- Hands-on experience with version control tools like Git.
- Experience with Amazon analytics services (Amazon EMR, Amazon Athena, AWS Glue), compute services (AWS Lambda, Amazon EC2), the S3 storage service, and related services such as SNS.
- Experience/knowledge of bash/shell scripting is a plus.
- Has built ETL processes to copy and structurally transform data across a wide variety of formats (CSV, TSV, XML, JSON), including fixed-width, delimited, and multi-record file formats.
- Good to have: knowledge of data warehousing concepts (dimensions, facts, snowflake and star schemas).
- Experience with columnar storage formats (Parquet, Avro, ORC) and compression techniques (Snappy, Gzip).
- Good to have: knowledge of at least one AWS database (Aurora, RDS, Redshift, ElastiCache, DynamoDB).
- Hands-on experience with tools like Jenkins to build, test, and deploy applications; awareness of DevOps concepts and ability to work in an automated release pipeline environment.
- Excellent debugging skills.

Preferred Qualifications
- Experience working with US clients and business partners.
- Knowledge of front-end frameworks.
- Exposure to the BFSI domain is good to have.
- Hands-on experience with an API gateway and management platform.
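As a rough illustration of the PySpark DataFrame work described above, here is a minimal, self-contained job skeleton. The S3 paths and column names (event_ts, status, event_type) are assumptions, not part of the listing.

```python
# Illustrative PySpark job skeleton; S3 paths and column names
# (event_ts, status, event_type) are assumptions, not from the listing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Read a delimited source with a header row.
events = spark.read.option("header", True).csv("s3://example-bucket/raw/events/")

# Filter, derive a date column, and aggregate to daily counts.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("status") == "COMPLETE")
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Partitioned Parquet output keeps downstream scans cheap (partition pruning).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"
)

spark.stop()
```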

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Who We Are: We are a digitally native company that helps organizations reinvent themselves and unleash their potential. We are the place where innovation, design, and engineering meet scale. Globant is a 20-year-old, NYSE-listed public organization with more than 33,000 employees worldwide, working out of 35 countries. www.globant.com

Job location: Pune/Hyderabad/Bangalore
Work Mode: Hybrid
Experience: 5 to 10 Years

Must-have skills:
1) AWS (EC2, EMR, EKS)
2) Redshift
3) Lambda functions
4) Glue
5) Python
6) PySpark
7) SQL
8) CloudWatch
9) A NoSQL database (DynamoDB, MongoDB, or any other)

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.

Responsibilities:
- Design scalable data models: develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
- Optimize data pipelines: work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
- AWS cloud data services: design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows (see the Glue sketch below).
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
- Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
- Write complex SQL queries and optimize them for performance and scalability.
- Monitor, troubleshoot, and improve data pipelines for reliability and performance.
- Focus on ETL automation using Python and PySpark: design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.
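The Glue-based ETL mentioned above typically starts from AWS Glue's standard PySpark job boilerplate. The sketch below follows that pattern; the catalog database, table, and output bucket are invented placeholders.

```python
# Hedged sketch of a standard AWS Glue PySpark job entry point.
# The catalog database, table, and output path are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table).
orders_dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop rows missing the business key, then write curated Parquet to S3.
orders_df = orders_dyf.toDF().dropna(subset=["order_id"])
orders_df.write.mode("append").parquet("s3://example-bucket/curated/orders/")

job.commit()
```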

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description
The Amazon Transportation team is looking for an innovative, hands-on, and customer-obsessed Business Analyst for its Analytics team. The candidate must be detail-oriented, have superior verbal and written communication skills, strong organizational skills, and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and have the ability to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline processes, identifying gaps in existing processes by analyzing data and liaising with relevant teams to plug them, and analyzing data and metrics and sharing updates with internal teams.

Key job responsibilities
- Translate business problems into analytical requirements and define the expected output.
- Create an analytical approach to solve the problem in line with stakeholder expectations.
- Be the domain expert with knowledge of data availability from various sources.
- Execute solutions with scalable development practices in scripting, data extraction, and data visualization.
- Triangulate data from multiple sources to ensure data fidelity.
- Own deep-dive analysis on key metrics.
- Maintain top-notch communication with stakeholders and team members, including project progress, blockers, etc.

A day in the life
- Solve analyses with well-defined inputs and outputs; drive to the heart of the problem and identify root causes.
- Handle large data sets in analysis and derive recommendations from the analysis.
- Understand the basics of test-and-control comparison; may provide insights through basic statistical measures such as hypothesis testing.
- Communicate analytical insights effectively.

About The Team
The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and reduce repeat analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG, and SA marketplaces. AOP is responsible for providing visibility on operations performance and implementing programs to improve network efficiency and defect reduction. The team has a diverse mix of strong engineers, analysts, and scientists who champion customer obsession. We enable operations to make data-driven decisions through developing near-real-time dashboards, self-serve deep-dive capabilities, and advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with operations teams.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries).
- Experience with a scripting language (e.g., Python, Java, or R).
- 2+ years of e-commerce, transportation, finance, or related analytical field experience.

Preferred Qualifications
- Master's degree or advanced technical degree.
- Knowledge of data modeling and data pipeline design.
- Experience with statistical analysis and correlation analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ASSPL - Telangana Job ID: A2910510

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Tamil Nadu, India

On-site

Job Title: Data Engineer

About VXI
VXI Global Solutions is a BPO leader in customer service, customer experience, and digital solutions. Founded in 1998, the company has 40,000 employees in more than 40 locations in North America, Asia, Europe, and the Caribbean. We deliver omnichannel and multilingual support, software development, quality assurance, CX advisory, and automation and process excellence to the world's most respected brands. VXI is one of the fastest-growing, privately held business services organizations in the United States and the Philippines, and one of the few US-based customer care organizations in China. VXI is also backed by private equity investor Bain Capital. Our initial partnership ran from 2012 to 2016 and was the beginning of prosperous times for the company. During this period, not only did VXI expand our footprint in the US and Philippines, but we also gained ground in the Chinese and Central American markets. We also acquired Symbio, expanding our global technology services offering and enhancing our competitive position. In 2022, Bain Capital re-invested in the organization after completing a buy-out from Carlyle. This is a rare occurrence in the private equity space and shows the level of performance VXI delivers for our clients, employees, and shareholders. With this recent investment, VXI has started on a transformation to radically improve the CX experience through an industry-leading generative AI product portfolio that spans hiring, training, customer contact, and feedback.

Job Description:
We are seeking talented and motivated Data Engineers to join our dynamic team and contribute to our mission of harnessing the power of data to drive growth and success. As a Data Engineer at VXI Global Solutions, you will play a critical role in designing, implementing, and maintaining our data infrastructure to support our customer experience and management initiatives. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity. This is an exciting opportunity to work with cutting-edge technologies and shape the future of data-driven decision-making at VXI Global Solutions.

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and store data from various sources.
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
- Implement data models and schemas to support analytics, reporting, and machine learning initiatives.
- Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness.
- Ensure data quality and integrity by implementing data validation, monitoring, and error handling mechanisms (see the validation sketch below).
- Collaborate with data analysts and data scientists to provide them with clean, reliable, and accessible data for analysis and modeling.
- Stay current with emerging technologies and best practices in data engineering and recommend innovative solutions to enhance our data capabilities.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of proven experience as a data engineer or in a similar role.
- Proficiency in SQL, Python, and/or other programming languages for data processing and manipulation.
- Experience with relational and NoSQL databases (e.g., SQL Server, MySQL, Postgres, Cassandra, DynamoDB, MongoDB, Oracle), data warehousing (e.g., Vertica, Teradata, Oracle Exadata, SAP HANA), and data modeling concepts.
- Strong understanding of distributed computing frameworks (e.g., Apache Spark, Apache Flink, Apache Storm) and cloud-based data platforms (e.g., AWS Redshift, Azure, Google BigQuery, Snowflake).
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker, Apache Superset) and data pipeline tools (e.g., Airflow, Kafka, Data Flow, Cloud Data Fusion, Airbyte, Informatica, Talend) is a plus.
- Understanding of data and query optimization, query profiling, and query performance monitoring tools and techniques.
- Solid understanding of ETL/ELT processes, data validation, and data security best practices.
- Experience with version control systems (Git) and CI/CD pipelines.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Join VXI Global Solutions and be part of a dynamic team dedicated to driving innovation and delivering exceptional customer experiences. Apply now to embark on a rewarding career in data engineering with us!
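For the data validation and error handling mechanisms the requirements mention, a minimal sketch might look like the following; the column names, rules, and sample batch are invented, and a production version would typically route failures to a dead-letter location and raise alerts.

```python
# Minimal data-validation sketch for an ETL step; thresholds, columns,
# and the sample batch are invented for illustration.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list:
    """Return human-readable validation failures (empty list means pass)."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["customer_id"].isna().any():
        failures.append("null customer_id values found")
    duplicate_orders = int(df.duplicated(subset=["order_id"]).sum())
    if duplicate_orders:
        failures.append(f"{duplicate_orders} duplicate order_id rows")
    return failures

batch = pd.DataFrame({"order_id": [1, 2, 2], "customer_id": ["a", None, "c"]})
problems = validate_batch(batch)
if problems:
    # A production pipeline would route the batch to a dead-letter
    # location and raise an alert instead of printing.
    print("validation failed:", "; ".join(problems))
```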

Posted 2 weeks ago

Apply

12.0 - 20.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Minimum 10 years in IT project/program management, with hands-on experience in tools like JIRA, Excel, MS Project, and Planisware. Strong in data platform implementation (Snowflake/Redshift), ETL/ELT, scalable architecture, and business-aligned solutions.

Posted 2 weeks ago

Apply

6.0 years

14 - 24 Lacs

India

On-site

Requirements
- 6+ years of experience as a Data Engineer.
- Strong proficiency in SQL.
- Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift).
- Expertise in ETL/ELT processes and batch and streaming data processing.
- Proven ability to troubleshoot data issues and propose effective solutions.
- Knowledge of AWS services (S3, DMS, Glue, Athena); see the Athena sketch below.
- Familiarity with dbt for data transformation and modeling.
- Must be fluent in English communication.

Desired Experience
- 3 years of experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
- Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt.
- Proficiency in Python for data engineering tasks.
- Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions.
- Familiarity with pub/sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS).
- Experience with CI/CD pipelines and automation for data processes.
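As a hedged sketch of the Athena usage listed above, the snippet below starts a query with boto3, polls for completion, and reads the results; the database name, query, and results bucket are placeholders.

```python
# Hedged sketch of running an Athena query with boto3; the database,
# query, and results bucket are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

query_id = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-query-results/athena/"},
)["QueryExecutionId"]

# Poll until the query finishes; production code would add backoff and timeouts.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```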

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
Are you passionate about transforming complex data into actionable business insights at a global scale? RBS Brand Experience (formerly APIE) is seeking an experienced Business Intelligence Engineer who thrives on ambiguity and can decipher evolving business needs to shape data-driven solutions. As a Business Intelligence Engineer, you'll be at the intersection of data and business strategy, translating complex requirements into actionable analytics solutions. You'll partner with stakeholders to unlock insights that elevate our global work authorization experiences and drive program scalability.

Key job responsibilities
A successful candidate will demonstrate:
- Advanced SQL skills for writing complex queries and stored procedures to extract, transform, and analyze large datasets.
- Proficiency in Python, particularly with libraries like pandas and PySpark, for data manipulation and ETL processes (see the pandas sketch below).
- Strong analytical and problem-solving capabilities, with the ability to translate business requirements into efficient data solutions.
- Experience in designing and implementing scalable ETL pipelines that can handle large volumes of data.
- Expertise in data modeling and database optimization techniques to improve query performance.
- Ability to work with various data sources and formats, integrating them into cohesive data structures.
- Skill in developing and maintaining data warehouses and data lakes.
- Proficiency in using BI tools to create insightful visualizations and dashboards.
- Ability to thrive in ambiguous situations, identifying data needs and proactively proposing solutions.
- Excellence in communicating technical concepts and data insights to both technical and non-technical audiences.
- A customer-centric mindset with a focus on delivering data solutions that drive business value.

A day in the life
You'll work closely with Product Managers, Software Developers, and business stakeholders to:
- Build and maintain dashboards that drive business decisions.
- Perform deep-dive analyses to uncover actionable insights.
- Develop and automate data processes to improve efficiency.
- Present findings and recommendations to leadership.
- Partner with global teams to implement data-driven solutions.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience with a scripting language (e.g., Python, Java, or R).
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries).
- Experience applying basic statistical methods (e.g., regression) to difficult business problems.
- Experience gathering business requirements and using industry-standard business intelligence tools to extract data, formulate metrics, and build reports.

Preferred Qualifications
- Bachelor's degree or advanced technical degree.
- Knowledge of data modeling and data pipeline design.
- Experience with statistical analysis and correlation analysis.
- Experience in designing and implementing custom reporting systems using automation tools.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - BLR 14 SEZ Job ID: A3036857
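To make the pandas-based ETL requirement concrete, here is a small, hypothetical extract-transform-load step; the file names and columns (submitted_at, resolved_at, country) are invented for illustration.

```python
# Illustrative pandas ETL step; the source file and columns are assumptions.
import pandas as pd

# Extract: raw case-level data with timestamps.
raw = pd.read_csv("work_auth_cases.csv", parse_dates=["submitted_at", "resolved_at"])

# Transform: compute cycle time, then roll up per country and month.
summary = (
    raw.dropna(subset=["resolved_at"])
       .assign(
           cycle_days=lambda d: (d["resolved_at"] - d["submitted_at"]).dt.days,
           month=lambda d: d["submitted_at"].dt.to_period("M").astype(str),
       )
       .groupby(["country", "month"], as_index=False)["cycle_days"]
       .mean()
       .rename(columns={"cycle_days": "avg_cycle_days"})
)

# Load: written to a file here; in practice this would land in a warehouse table.
summary.to_csv("cycle_time_summary.csv", index=False)
```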

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

About The Role & Team
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology- and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
• Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
• Take the lead in analyzing, designing, and implementing data solutions, including constructing and designing data models and ETL processes.
• Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
• Lead and mentor engineering discussions, advocating for best practices.
• Actively participate in design and code reviews.
• Access and explore third-party data APIs to determine the data required to meet business needs.
• Ensure data quality and integrity across different sources and systems.
• Manage data pipelines for both analytics and operational purposes.
• Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
• Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
• Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
• Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
• Exhibit a strong background in developing data products and APIs, and in maintaining testing, monitoring, isolation, and SLA processes.
• Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
• Are proficient in programming with Python or other scripting languages.
• Have familiarity with columnar OLAP databases and data modeling.
• Have experience building ELT/ETL processes using tools like dbt, Airflow (see the DAG sketch below), and Fivetran, with CI/CD using GitHub and reporting in Tableau.
• Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
• A good understanding of Salesforce or NetSuite systems.
• Experience in SaaS environments.
• Designed and deployed ML models.
• Experience with events and streaming data.
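For the Airflow experience called out above, a minimal DAG sketch is shown below, using the Airflow 2.4+ style `schedule` argument; the DAG id, schedule, and task bodies are invented placeholders.

```python
# Minimal Airflow DAG sketch (Airflow 2.4+ "schedule" argument).
# DAG id, schedule, and task bodies are invented placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from a third-party API")

def transform():
    print("clean and model the data")

def load():
    print("load results into the warehouse")

with DAG(
    dag_id="finance_daily_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",  # daily at 06:00
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```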

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are the only professional services organization with a separate business dedicated exclusively to the financial services marketplace. Join the Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups, including Asset Management, Banking and Capital Markets, Insurance and Private Equity, Health, Government, and Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning, and individually tailored coaching, you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients, and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

We’re seeking a versatile Full Stack Developer with hands-on experience in Python (including multithreading and popular libraries), GenAI, and AWS cloud services. The ideal candidate should be proficient in backend development using NodeJS, ExpressJS, Python Flask/FastAPI, and RESTful API design, with strong frontend skills in Angular, ReactJS, and TypeScript. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize, and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surrounding business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations.

Your Key Responsibilities
- Application Development: Design and develop cloud-native applications and services using AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR (a minimal Lambda handler sketch follows this listing).
- Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates.
- Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS.
- Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency.
- Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
- Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations.
- Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting.
- Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow.
- Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability.
- Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration.

Skills And Attributes For Success
Required Skills:
- AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, Glue, Redshift, and EMR.
- Backend: Python (multithreading, Flask, FastAPI), NodeJS, ExpressJS, REST APIs.
- Frontend: Angular, ReactJS, TypeScript.
- Cloud Engineering: Development with AWS (Lambda, EC2, S3, API Gateway, DynamoDB), Docker, Git, etc.
- Proven experience in developing and deploying AI solutions with Python and JavaScript.
- Strong background in machine learning, deep learning, and data modelling.
- Good to have: CI/CD pipelines, full-stack architecture, unit testing, API integration.
- Security: Understanding of AWS security best practices, including IAM, KMS, and encryption.
- Observability Tools: Proficiency with observability tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack.
- Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm.
- Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or the ELK Stack.
- Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams.

Preferred Qualifications:
- Certifications: AWS Certified Solutions Architect (Associate or Professional), AWS Certified Developer (Associate), or similar certifications.
- Experience: At least 5 years of experience in an application engineering role with a focus on AWS technologies.
- Agile Methodologies: Familiarity with Agile development practices and methodologies.
- Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues.

Education:
Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland, and the UK – and with teams from all EY service lines, geographies, and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills, and insights that will stay with you throughout your career.

- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching, and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
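As a small illustration of the Lambda plus API Gateway development this role describes, here is a sketch of a handler using the standard proxy-integration response shape; the payload fields are invented.

```python
# Sketch of an AWS Lambda handler behind API Gateway (proxy integration).
# The response shape is the standard proxy contract; payload fields are invented.
import json

def lambda_handler(event, context):
    # Query-string parameters arrive under this key in proxy integrations.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```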

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Data Architect – AEP Competency
Years of Experience: 10-12
Location: Remote

Position Summary:
An experienced data modeler with SQL, ETL, and some development background, to define new data schemas and data ingestion for Adobe Experience Platform customers. You will interface directly with enterprise customers and collaborate with internal teams.

What you’ll do
• Interface with customers to gather requirements, design solutions, and make recommendations.
• Lead customer project conference calls or interface with a Project Manager.
• Deliver Technical Specifications documents for customer review.
• Collaborate closely with onshore and offshore software engineering consultants.
• Leverage understanding of data relationships and schemas to structure data so that clients can perform dynamic customer-level analysis.
• Construct processes to build customer ID mapping files for use in building a 360-degree view of the customer across data sources (see the mapping sketch below).
• Leverage scripting languages to automate key processes governing data movement, cleansing, and processing activities.
• Bill and forecast time toward customer projects.
• Innovate on new ideas to solve customer needs.

Requirements:
• 10+ years of strong experience with data transformation and ETL on large data sets.
• Experience designing customer-centric datasets (CRM, call center, marketing, offline, point of sale, etc.).
• 5+ years of data modeling experience (relational, dimensional, columnar, big data).
• 5+ years of complex SQL or NoSQL experience.
• Experience in advanced data warehouse concepts.
• Experience with industry ETL tools (e.g., Informatica, Unifi).
• Experience with business requirements definition and management, structured analysis, process design, and use case documentation.
• Experience with reporting technologies (e.g., Tableau, Power BI).
• Experience in professional software development.
• Exceptional organizational skills and the ability to multi-task across different customer projects simultaneously.
• Strong verbal and written communication skills to interface with the sales team and lead customers to successful outcomes.
• Must be self-managed, proactive, and customer-focused.
• Degree in Computer Science, Information Systems, Data Science, or a related field.

Special consideration given for:
• Experience and knowledge with Adobe Experience Cloud solutions.
• Experience and knowledge with digital analytics or digital marketing.
• Experience in programming languages (Python, Java, or Bash scripting).
• Experience with big data technologies (e.g., Hadoop, Spark, Redshift, Snowflake, Hive, Pig, etc.).
• Experience as an enterprise technical or engineering consultant.
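The customer ID mapping responsibility can be illustrated with a toy pandas sketch; the sources, join key (email), and columns are hypothetical, and real identity stitching is considerably more involved.

```python
# Toy sketch of stitching customer IDs across sources; all data is invented.
import pandas as pd

crm = pd.DataFrame({"crm_id": [1, 2], "email": ["a@x.com", "b@x.com"]})
pos = pd.DataFrame({"pos_id": [9, 8], "email": ["b@x.com", "c@x.com"]})
web = pd.DataFrame({"cookie_id": ["k1", "k2"], "email": ["a@x.com", "b@x.com"]})

# Outer-join on the shared key (email here) so every source ID is kept.
mapping = crm.merge(pos, on="email", how="outer").merge(web, on="email", how="outer")

# Assign a surrogate key for the unified 360-degree customer view.
mapping.insert(0, "customer_key", range(1, len(mapping) + 1))
print(mapping)
```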

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

DXFactor is a US-based tech company working with customers across the globe, and a Great Place to Work certified company. We are looking for candidates for Data Engineer (4 to 6 years of experience).

We have our presence in: US and India (Ahmedabad, Bangalore)
Location: Ahmedabad
Website: www.DXFactor.com
Designation: Data Engineer (expertise in Snowflake, AWS & Python)

Key Responsibilities
- Design, develop, and maintain scalable data pipelines for batch and streaming workflows.
- Implement robust ETL/ELT processes to extract data from various sources and load it into data warehouses (see the Snowflake sketch below).
- Build and optimize database schemas following best practices in normalization and indexing.
- Create and maintain documentation for data flows, pipelines, and processes.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Implement data quality checks and validation processes.
- Build and maintain CI/CD workflows for data engineering projects.
- Stay current with emerging technologies and recommend improvements to existing systems.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 4+ years of experience in data engineering roles.
- Strong proficiency in Python programming and SQL query writing.
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery).
- Proven track record in building efficient and scalable data pipelines.
- Practical knowledge of batch and streaming data processing approaches.
- Experience implementing data validation, quality checks, and error handling mechanisms.
- Working experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
- Understanding of different data architectures, including data lakes, data warehouses, and data mesh.
- Demonstrated ability to debug complex data flows and optimize underperforming pipelines.
- Strong documentation skills and the ability to communicate technical concepts effectively.
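For the Snowflake loading work this role centers on, here is a hedged sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are placeholders, and @raw_stage is assumed to be an external stage already pointing at the S3 prefix.

```python
# Hedged sketch: staging S3 data into Snowflake with COPY INTO, using the
# snowflake-connector-python package. Account, credentials, stage, and
# table names are placeholders; @raw_stage is assumed to already exist.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

with conn.cursor() as cur:
    cur.execute(
        """
        COPY INTO raw.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
        """
    )
conn.close()
```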

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Drive the Future of Data-Driven Entertainment
Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world, and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, operational excellence to enhance efficiency and speed, and a visionary approach to how Data Engineering can drive company success. If you’re ready to make a significant impact in the world of OTT and entertainment, let’s talk.

AVP, Data Engineering – SonyLIV
Location: Bangalore

Responsibilities:
- Define the technical vision for scalable data infrastructure: establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP).
- Lead innovation in data processing and architecture: advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement.
- Ensure operational excellence in data systems: set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues.
- Build and mentor a high-caliber data engineering team: recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency.
- Collaborate with cross-functional teams: partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics.
- Architect and manage production data models and pipelines: design and launch production-ready data models and pipelines capable of supporting millions of users. Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access.
- Drive data quality and business insights: implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance.

This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment.

Minimum Qualifications:
- 15+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments.
- Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges.
- Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage.
- Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization.
- Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence.
- Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments.
- Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems.

Preferred Qualifications:
- Bachelor's or Master’s degree in Computer Science, Mathematics, Physics, or a related technical field.
- Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting.
- Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions.
- Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency.
- Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization.

Why join us?
CulverMax Entertainment Pvt Ltd (formerly known as Sony Pictures Networks India) is home to some of India’s leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, SONY TEN1, SONY TEN2, SONY TEN3, and SONY TEN4, to name a few! Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive, digitally led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and digital media. But our quest to Go Beyond doesn’t end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self philosophy. We strive to remain an ‘Employer of Choice’ and have been recognized as:
- India’s Best Companies to Work For 2021 by the Great Place to Work® Institute.
- 100 Best Companies for Women in India by AVTAR & Seramount for 6 years in a row.
- UN Women Empowerment Principles Award 2022 for Gender Responsive Marketplace and Community Engagement & Partnership.
- ET Human Capital Awards 2023 for Excellence in HR Business Partnership & Team Building Engagement.
- ET Future Skills Awards 2022 for Best Learning Culture in an Organization and Best D&I Learning Initiative.
The biggest award of course is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position: Staff Engineer - Data, Digital Business

Role Overview - This role involves leading SonyLIV's data engineering strategy: architecting scalable data infrastructure, driving innovation in data processing, ensuring operational excellence, and fostering a high-performance team to enable data-driven insights for OTT content and user engagement.

Location - Mumbai
Experience - 8+ years

Responsibilities:
- Define the technical vision for scalable data infrastructure: establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP).
- Lead innovation in data processing and architecture: advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement.
- Ensure operational excellence in data systems: set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues.
- Build and mentor a high-caliber data engineering team: recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency.
- Collaborate with cross-functional teams: partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics.
- Architect and manage production data models and pipelines: design and launch production-ready data models and pipelines capable of supporting millions of users. Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access.
- Drive data quality and business insights: implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance.

This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment.

Minimum Qualifications:
- 8+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments.
- Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges.
- Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage.
- Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization.
- Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence.
- Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments.
- Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems.

Preferred Qualifications:
- Bachelor's or Master’s degree in Computer Science, Mathematics, Physics, or a related technical field.
- Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting.
- Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions.
- Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency.
- Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization.

Why SPNI? Join Our Team at SonyLIV
Drive the Future of Data-Driven Entertainment. Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world, and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, operational excellence to enhance efficiency and speed, and a visionary approach to how Data Engineering can drive company success.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description and Requirements
"At BMC trust is not just a word - it's a way of life!"

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation!

BU Description
We are the Technology and Automation team that drives competitive advantage for BMC by enabling recurring revenue growth, customer centricity, operational efficiency, and transformation through actionable insights, focused operational execution, and obsessive value realization.

About You
You are a self-motivated, proactive individual who thrives in a fast-paced environment. You have a strong eagerness to learn and grow, continuously staying updated with the latest trends and technologies in data engineering. Your passion for collaboration makes you a valuable team player, contributing to a positive work culture while also guiding and mentoring junior team members. You’re excited about problem-solving and have the ability to take ownership of projects from start to finish. With a keen interest in data-driven decision-making, you are ready to work on cutting-edge solutions that have a direct impact on the business.

Role And Responsibilities
As a Data Engineer, you will play a crucial role in leading and managing strategic data initiatives across the business. Your responsibilities will include:
- Leading data engineering projects across key business functions, including Marketing, Sales, Customer Success, and Product R&D.
- Developing and maintaining data pipelines to extract, transform, and load (ETL) data into data warehouses or data lakes.
- Designing and implementing ETL processes, ensuring the integrity, scalability, and performance of the data architecture.
- Leading data modeling efforts, ensuring that data is structured for optimal performance and that security best practices are maintained.
- Collaborating with data scientists, analysts, and stakeholders to understand data requirements and provide valuable insights across the customer journey.
- Guiding and mentoring junior engineers, providing technical leadership and ensuring best practices are followed.
- Maintaining documentation for data structures, ETL processes, and data lineage, ensuring clarity and ease of understanding across the team.
- Developing and maintaining data security, compliance, and retention protocols as part of best practice initiatives.

Professional Expertise
Must-Have Skills
- 5+ years of experience in data engineering, data warehousing, and building enterprise-level data integrations.
- Proficiency in SQL, including query optimization and tuning for relational databases (Snowflake, MS SQL Server, Redshift, etc.).
- 2+ years of experience working with cloud platforms (AWS, GCP, Azure, or OCI).
- Expertise in Python and Spark for data extraction, manipulation, and data pipeline development.
- Experience with structured, semi-structured, and unstructured data formats (JSON, XML, Parquet, CSV); see the JSON-flattening sketch below.
- Familiarity with version control systems (Git, Bitbucket) and Agile methodologies (Jira).
- Ability to collaborate with data scientists and business analysts, providing data support and insights.
- Proven ability to work effectively in a team setting, balancing multiple projects and leading initiatives.

Nice-to-Have Skills
- Experience in the SaaS software industry.
- Knowledge of analytics governance, data literacy, and core visualization tools (Tableau, MicroStrategy).
- Familiarity with CRM and marketing automation tools (Salesforce, HubSpot, Eloqua).

Education
Bachelor’s or master’s degree in Computer Science, Information Systems, or a related field (advanced degree preferred).

Our commitment to you!
BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If, after reading the above, you’re unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experiences to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 2,033,200 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and apply.
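To make the semi-structured data requirement concrete, the snippet below flattens nested JSON records into a tabular frame with pandas; the record layout is invented for illustration.

```python
# Small sketch of flattening semi-structured JSON into a tabular frame.
# The record layout is invented for illustration.
import pandas as pd

records = [
    {"account": {"id": 1, "region": "EMEA"},
     "events": [{"type": "login"}, {"type": "quote"}]},
    {"account": {"id": 2, "region": "APAC"},
     "events": [{"type": "login"}]},
]

# One output row per nested event, carrying account fields along as metadata.
flat = pd.json_normalize(
    records,
    record_path="events",
    meta=[["account", "id"], ["account", "region"]],
)
print(flat)
#     type  account.id account.region
# 0  login           1           EMEA
# 1  quote           1           EMEA
# 2  login           2           APAC
```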

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to create the next-generation tools for intuitive data access for transportation operations? We are looking for a Business Intelligence Engineer to help set up and deliver robust, structured reporting, analytics, and models for the RBS Cost to Serve team. You will be a key contributor to shaping our strategic Defect Elimination program by equipping the program teams with the key analytics and insights. You will have an eye for detail, proficient/advanced SQL, data warehousing, and Python skills, and a knack for solving challenging data and reporting problems. The role requires you to be comfortable working with, and clearly communicating with, other functional teams, regionally and globally. The position will be based in Bangalore/Chennai/Hyderabad. You will report to a Sr Program Manager, Cost to Serve Analytics & Insights, working intensely with her (larger) project team, including Finance. The ideal candidate will be comfortable in a fast-paced, dynamic environment and will be a creative and analytical problem solver with the opportunity to fulfill the Amazon motto to “Work Hard. Have Fun. Make History.”

Key job responsibilities
- Analyze business requirements and translate them into technical requirements; with the support of senior colleagues, integrate them into a working, stable, and scalable system.
- Independently realize requirements for Business Intelligence and custom software development products.
- Create test cases and guide business stakeholders through the testing process.
- Present solutions and implemented features in weekly syncs with business stakeholders.
- Own maintenance and error handling of deployed solutions.
- Focus on project delivery.

About The Team
The RBS Cost to Serve team aims to identify and eliminate waste, negative experiences, and non-value activities across the end-to-end remit of the supply chain and dependent work streams that slow down resolution for our stakeholders. The primary objective is to reduce the Cost to Serve for Amazon and enable “Free Cash Flow” by optimizing cost-per-shipped-unit economics across the supply chain systems through Defect Elimination. Our program will support establishing end-to-end supply chain checkpoints on how inventory moves inside Amazon, to identify gaps and broken processes/policies and to eliminate root causes of systemic difficulties rather than merely addressing symptoms, on behalf of our customers. This team will partner with internal/external stakeholders to establish the Cost to Serve charter based on opportunity size and own specific unique initiatives that are beyond the existing team’s program scope.

Basic Qualifications
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience with statistical analysis packages such as R, SAS, and Matlab.
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling.

Preferred Qualifications
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company: ADCI - HYD 15 SEZ - E55
Job ID: A2999431
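The posting's core technical expectation, using SQL to pull warehouse data and Python to process it, can be illustrated with a small, hedged sketch. Everything specific here is invented for illustration: the cluster endpoint, the credentials, and the shipment_defects table and its columns are hypothetical placeholders, not details from the role.

```python
# Hypothetical sketch: the endpoint, credentials, and the shipment_defects
# table/columns are illustrative placeholders only.
import pandas as pd
import redshift_connector  # Amazon's Python driver for Redshift

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="analytics",
    user="analyst",
    password="***",  # in practice, pull from a secrets manager
)

cur = conn.cursor()
cur.execute("""
    SELECT node_id,
           DATE_TRUNC('week', defect_date) AS week,
           COUNT(*)                        AS defects
    FROM shipment_defects
    WHERE defect_date >= DATEADD(week, -12, CURRENT_DATE)
    GROUP BY 1, 2
""")
df = cur.fetch_dataframe()  # redshift_connector can return a pandas DataFrame

# Flag nodes whose weekly defect counts trend upward, a typical input
# to a Defect Elimination review.
trend = (
    df.sort_values("week")
      .groupby("node_id")["defects"]
      .apply(lambda s: s.diff().mean())
)
print(trend[trend > 0].sort_values(ascending=False).head(10))
```

The same pull-then-process pattern underpins most BIE reporting work: SQL does the heavy aggregation in the warehouse, and pandas handles the last-mile shaping for dashboards or models.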

Posted 2 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

DailyObjects is a homegrown brand that creates aspirational everyday products designed to enhance modern lifestyles. Proudly designed and made in India, DailyObjects brings quality design and Indian craftsmanship to the world. With over 30,000 styles across a dozen accessories categories, our products are loved by over 2 million customers globally. At DailyObjects, we are committed to designing exceptional products that blend distinctive aesthetics with practical functionality. We are a fast-growing D2C brand with a dynamic culture of innovation, adaptability, and excellence.
We are looking for a talented 3D Designer who can bring products to life through detailed, photorealistic 3D renders, animations, and mockups. You will play a key role in visualizing products before they are physically manufactured and in creating compelling content for marketing, e-commerce, and social media.
Responsibilities:
1) Develop high-quality 3D models, renders, and animations of lifestyle and tech accessory products
2) Collaborate with the product design, marketing, and UI/UX teams to visualize concepts, create mockups, and enhance customer experience
3) Prepare 3D assets for product listing pages, AR previews, and promotional materials
4) Ensure models are optimized for performance without compromising on quality
5) Maintain file organization and asset libraries
6) Stay updated with the latest 3D design tools, trends, and best practices
Requirements:
1) Bachelor's degree or diploma in Design, Animation, 3D Modelling, or a related field
2) Proficiency in 3D software such as Blender, Cinema 4D, Maya, or 3ds Max
3) Knowledge of rendering engines (KeyShot, V-Ray, Redshift, etc.)
4) Understanding of texture mapping, lighting, and material creation
5) Basic knowledge of Adobe Creative Suite (Photoshop, Illustrator)
6) Strong visual sense and attention to detail
7) Ability to manage time and multiple projects simultaneously
8) Experience in e-commerce or lifestyle product rendering is a plus

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

India

On-site

Company Description
Wellcove, operating as the DBA for CHCS Services Inc., is a premier full-service senior market solutions provider with over 25 years of experience. Specializing in long-term care and Medicare Supplement plans A-N, Wellcove offers comprehensive solutions for the senior insurance market sector. As a leading third-party administrator, Wellcove provides exceptional service and operates as an extension of the client's organization and brand, leveraging top-notch infrastructure and technology.
Minimum experience required for the job: 2+ years.
Required Technical Skillset: hands-on experience with some of these technologies - SQL, PL/SQL, SSRS, SSIS, and AWS Redshift.
Tools: MS SQL Server
- Working knowledge of MS SQL Server, including joins, constraints, functions, stored procedures, etc. (see the sketch below)
- Strong hands-on skills in SSIS and SSRS, including package configurations, exception handling, and package deployment
- Excellent problem-solving and analytical skills
- Excellent verbal and written communication skills
- Nice to have: hands-on knowledge of .NET, C#, and Power BI
Job Description:
- Interact with the customer for requirement gathering and solve problems independently
- Drive business requirements
- Independently perform estimation (WBS) and implementation/execution planning for projects
- Propose application specifications, artifacts, and implementation plans
- Translate business requirements into programmed applications, modules, and business processes/flows
- Conduct daily stand-up meetings to take stock of the team's progress on all deliverables
- Manage application management deliverables
- Carry out code reviews and documentation, and review deliverables from junior developers
- Conduct technical and functional sessions periodically
- Understand the application lifecycle management process
- Communicate effectively, with proactive status reporting to different stakeholders in the account
- Monitor applications and understand the project process; be well-versed in root cause analysis and able to provide the right solution to support requests
- Prepare HLDs, LLDs, design docs, technical design docs, etc.
- Understand technical mini-specifications and develop code from them while maintaining specified coding standards
- Prepare test plans (unit/module level) and execute them
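As a concrete, hedged illustration of the stored-procedure work referenced above, here is a minimal Python/pyodbc sketch for invoking a parameterized procedure on MS SQL Server; the server, database, procedure name, parameters, and result columns are all hypothetical, and the role's day-to-day work would more often live in T-SQL, SSIS, and SSRS directly.

```python
# Hypothetical sketch: server, database, procedure, and columns are invented.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-prod.example.com;DATABASE=ClaimsDB;"
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Call a parameterized stored procedure using ODBC call syntax.
cursor.execute("{CALL dbo.usp_GetOpenClaims(?, ?)}", ("LTC", 2024))
for row in cursor.fetchall():
    # pyodbc rows allow attribute access by result-set column name.
    print(row.claim_id, row.status)

conn.close()
```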

Posted 2 weeks ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
The Rest of World (ROW) Transportation Execution team in Hyderabad is looking for an innovative, hands-on, and customer-obsessed BIE for its Analytics function. The candidate must be detail oriented, have superior verbal and written communication skills, strong organizational skills, and excellent technical skills, and should be able to juggle multiple tasks at once. The candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline processes, identifying gaps in the existing process by analyzing data and liaising with the relevant team(s) to plug them, and analyzing data and metrics and sharing updates with the internal teams.
Preferred qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

DESCRIPTION
The Rest of World (ROW) Transportation Execution team in Hyderabad is looking for an innovative, hands-on, and customer-obsessed BIE for its Analytics function. The candidate must be detail oriented, have superior verbal and written communication skills, strong organizational skills, and excellent technical skills, and should be able to juggle multiple tasks at once. The candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline processes, identifying gaps in the existing process by analyzing data and liaising with the relevant team(s) to plug them, and analyzing data and metrics and sharing updates with the internal teams.
BASIC QUALIFICATIONS
- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Job details
IND, TS, Hyderabad
Supply Chain/Transportation Management

Posted 2 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent
Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.
Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes (see the sketch below).
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.
Qualifications
Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks; experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.
Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).
Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
- AWS Glue Catalog: 5 years (Required)
- Data Engineering: 6 years (Required)
- AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
- AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
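To ground the Glue-centered ETL responsibilities above, here is a minimal, assumption-heavy sketch of an AWS Glue PySpark job; the catalog database (raw_zone), table (orders), column mappings, and S3 output path are hypothetical placeholders rather than anything from this role.

```python
# Hypothetical Glue job sketch: catalog names, mappings, and paths are invented.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Keep and retype only the columns downstream models need.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_ts", "timestamp"),
        ("amount", "double", "amount", "double"),
    ],
)

# Land curated Parquet in the lake; a Redshift COPY or a Glue connection
# would handle the warehouse load as a separate step.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```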

Posted 2 weeks ago

Apply

3.0 years

7 - 9 Lacs

Gurgaon

On-site

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.
What You'll Do
As a Data Engineer, you will play a crucial role in designing, building, and maintaining the data infrastructure and systems required for efficient and reliable data processing. You will collaborate with cross-functional teams, including data scientists and analysts, to ensure the availability, integrity, and accessibility of data for various business needs. This role requires a strong understanding of data management principles, database technologies, data integration, and data warehousing concepts.
Key Responsibilities
- Develop and maintain data warehouse solutions, including data modeling, schema design, and indexing strategies
- Optimize data processing workflows for improved performance, reliability, and scalability
- Identify and integrate diverse data sources, both internal and external, into a centralized data platform
- Implement and manage data lakes, data marts, or other storage solutions as required
- Ensure data privacy and compliance with relevant data protection regulations
- Define and implement data governance policies, standards, and best practices
- Transform raw data into usable formats for analytics, reporting, and machine learning purposes
- Perform data cleansing, normalization, aggregation, and enrichment operations to enhance data quality and usability (see the sketch below)
- Collaborate with data analysts and data scientists to understand data requirements and implement appropriate data transformations
What You'll Bring
- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field
- Proficiency in SQL and experience with relational databases (e.g., Snowflake, MySQL, PostgreSQL, Oracle)
- 3+ years of experience in data engineering or a similar role
- Hands-on programming skills in languages such as Python or Java are a plus
- Familiarity with cloud-based data platforms (e.g., AWS, Azure, GCP) and related services (e.g., S3, Redshift, BigQuery) is good to have
- Knowledge of data modeling and database design principles
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus
- Strong problem-solving and analytical skills with attention to detail
- Experience with HR data analysis and HR domain knowledge is preferred
Who You'll Work With
As part of the People Analytics team, you will modernize HR platforms, capabilities, and engagement, automate and digitize core HR processes and operations, and enable greater efficiency. You will collaborate with the global people team and colleagues across BCG to manage the life cycle of all BCG employees.
The People Management Team (PMT) comprises several centers of expertise, including HR Operations, People Analytics, Career Development, Learning & Development, Talent Acquisition & Branding, Compensation, and Mobility. Our centers of expertise work together to build out new teams and capabilities by sourcing, acquiring, and retaining the best, diverse talent for BCG’s Global Services Business. We develop talent and capabilities while enhancing managers’ effectiveness and building affiliation and engagement in our new global offices. The PMT also harmonizes process efficiencies, automation, and global standardization. Through analytics and digitalization, we are always looking to expand our PMT capabilities and coverage.
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
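As an illustration of the cleansing/normalization responsibility flagged above, here is a small, hedged pandas sketch; the employee_raw.csv file and its columns are assumptions made purely for illustration, not BCG data structures.

```python
# Hypothetical sketch: the input file and column names are invented.
import pandas as pd

raw = pd.read_csv("employee_raw.csv")

clean = (
    raw.drop_duplicates(subset="employee_id")
       .assign(
           # Normalize free-text office names to a canonical form.
           office=lambda d: d["office"].str.strip().str.title(),
           # Parse hire dates, coercing unparseable values to NaT for review.
           hire_date=lambda d: pd.to_datetime(d["hire_date"], errors="coerce"),
       )
)

# Simple enrichment: whole-year tenure, usable by reporting and ML alike.
clean["tenure_years"] = (pd.Timestamp.today() - clean["hire_date"]).dt.days // 365

# Aggregate into a small reporting mart: headcount and median tenure per office.
mart = clean.groupby("office").agg(
    headcount=("employee_id", "count"),
    median_tenure=("tenure_years", "median"),
)
print(mart)
```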

Posted 2 weeks ago

Apply

0 years

4 - 6 Lacs

Gurgaon

On-site

About Us
KlearNow.AI is on a mission to futurize global trade. Our patented AI and machine learning platform digitizes and contextualizes unstructured trade documents to unlock real-time shipment visibility, drive smart analytics, and provide critical business intelligence, without the hassle of complex integrations. We empower supply chains to move faster, work smarter, and make data-driven decisions with confidence. With operations in the U.S., Canada, U.K., Spain, and the Netherlands, and aggressive growth plans underway, we’re scaling a global platform for the future of logistics. KlearNow is operational and a certified Customs Business provider in the US, Canada, UK, Spain, and the Netherlands, with plans to grow into many more markets in the near future. We achieve our goals by assembling a team of the best talents. As we expand, it’s crucial to maintain and strengthen our culture, which places a high value on our people and teams. Our collective growth and triumphs are intrinsically linked to the success and well-being of every team member.
OUR VISION
To futurize global trade, empowering people and optimizing processes with AI-powered clarity.
YOUR MISSION
We’re building a team of bold thinkers, problem solvers, and storytellers. As part of our high-energy, inclusive workplace, you’ll challenge the status quo of traditional supply chains and help shape a more transparent, intelligent, and efficient world of trade. Whether you're a product innovator, logistics expert, or marketing storyteller, your work at KlearNow.AI will make a measurable impact.
Why KlearNow.ai
- Global Impact: Be part of a platform live in five countries and expanding rapidly.
- Fast-Growing SaaS Company: Work in an agile environment with enterprise backing.
- Cutting-Edge Tech: AI-powered customs clearance, freight visibility, document automation, and drayage intelligence, all in one.
- People-First Culture: We invest in our team’s growth and well-being.
- Make Your Mark: Shape the future of trade with your ideas and energy.
Business Analyst - Data Science & Business Intelligence
Location: India
Employment Type: Full-time
The Role: Join our Data & Analytics team as a Business Analyst, where you'll transform data from our modern data warehouse into actionable business insights and strategic recommendations. You'll work with advanced analytics tools and techniques to create compelling reports, dashboards, and predictive models that drive data-driven decision making across the organization.
Key Responsibilities:
- Analyze data from cloud data warehouses (like Amazon Redshift) to identify business trends and opportunities
- Create interactive dashboards and reports using Business Intelligence platforms (like ThoughtSpot, Power BI)
- Develop statistical models and perform predictive analytics using tools like Python and R (see the sketch below)
- Collaborate with stakeholders to understand business requirements and translate them into analytical solutions
- Design and implement KPIs, metrics, and performance indicators for various business functions
- Conduct ad-hoc analysis to support strategic business decisions and initiatives
- Present findings and recommendations to leadership through compelling data visualizations
- Monitor and troubleshoot existing reports and dashboards to ensure accuracy and performance
- Ensure data quality and consistency in all analytical outputs and reporting
- Support business teams with self-service analytics training and best practices
Required Qualifications:
- Strong analytical and problem-solving skills with business acumen
- Experience with Business Intelligence tools and dashboard creation
- Proficiency in data analysis using programming languages (like Python, R) or advanced Excel
- Experience querying cloud data warehouses and relational databases
- Strong data visualization and storytelling capabilities
- Experience with statistical analysis and basic predictive modeling
Preferred Qualifications:
- Experience with advanced BI platforms (like ThoughtSpot) is a significant advantage
- Machine learning and advanced statistical modeling experience
- Experience with modern analytics tools and frameworks
- Advanced data visualization and presentation skills
- Experience with business process optimization and data-driven strategy
Join our vibrant and forward-thinking team at KlearNow.ai as we continue to push the boundaries of AI/ML technology. We offer a competitive salary, flexible work arrangements, and ample opportunities for professional growth. We are committed to diversity, equality, and inclusion. If you are passionate about shaping the future of logistics and supply chain and making a difference, we invite you to apply.
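As a feel for the "basic predictive modeling" the role asks for, here is a deliberately small sketch of a trend projection in Python; shipment_volumes.csv and its columns are hypothetical, and a real analysis would validate the model rather than fit a single line.

```python
# Hypothetical sketch: the input file and columns are invented placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("shipment_volumes.csv", parse_dates=["week"])
df = df.sort_values("week").reset_index(drop=True)
df["t"] = range(len(df))  # simple time index as the single feature

# Fit a straight-line trend to weekly volumes.
model = LinearRegression().fit(df[["t"]], df["volume"])

# Project the next four weeks to support capacity-planning discussions.
future = pd.DataFrame({"t": range(len(df), len(df) + 4)})
print(model.predict(future))
```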

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Overview
As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.
Responsibilities
- Be an active contributor to code development in projects and services
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products (see the sketch below)
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance
- Implement best practices around systems integration, security, performance, and data management
- Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners
- Develop and optimize procedures to “productionalize” data science models
- Define and manage SLAs for data products and processes running in production
- Support large-scale experimentation done by data scientists
- Prototype new approaches and build solutions at scale
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create and audit reusable packages or libraries
Qualifications
- 6+ years of overall technology experience, including at least 4+ years of hands-on software development, data engineering, and systems architecture
- 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools
- 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, and Scala
- 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services (Azure certification is a plus)
- Experience with integration of multi-cloud services with on-premises technologies
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake
- Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes
- Experience with version control systems like GitHub and deployment and CI tools
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools
- Experience with statistical/ML techniques is a plus
- Experience building solutions in the retail or supply chain space is a plus
- Understanding of metadata management, data lineage, and data glossaries is a plus
- Working knowledge of agile development, including DevOps and DataOps concepts
- Familiarity with business intelligence tools (such as Power BI)
- BA/BS in Computer Science, Math, Physics, or other technical fields
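One way to picture the "drive data quality across data products" responsibility is a PySpark gate that fails a pipeline run on bad data, in the spirit of the Deequ/Great Expectations tools named above. This is a hedged sketch: the lake path and the order_id/amount columns are invented placeholders.

```python
# Hypothetical sketch: the storage path and column names are invented.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()
df = spark.read.parquet("abfss://lake@example.dfs.core.windows.net/sales_raw/")

total = df.count()
checks = {
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
    "dup_order_id": total - df.dropDuplicates(["order_id"]).count(),
}

# Fail the run if any check breaches its threshold (here, zero tolerance),
# so bad batches never reach downstream data products.
failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data quality gate failed: {failed}")
```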

Posted 2 weeks ago

Apply

0 years

0 Lacs

Ahmedabad

On-site

ROLES & RESPONSIBILITIES:
- Work with business and IT partners to understand the business and data requirements
- Acquire data from primary or secondary data sources and perform data profiling (see the sketch below)
- Interpret data, analyze results, and provide quick data analysis
- Identify, analyze, and interpret trends or patterns in complex datasets
- Build bus matrices, proofs of concept, source mapping documents, and the raw source data model
- Locate and define new process improvement opportunities
MANDATORY SKILLS:
- Strong knowledge of and experience with Excel, databases (Redshift, Oracle, etc.), and programming languages (Python, R)
- Ability to write SQL queries to perform data profiling and data analysis, and to present the insights to the business
- Exposure to data warehousing and data modeling concepts
- Strong exposure to the IT project lifecycle
- Finance/life science domain experience
- BI tool knowledge is an added advantage
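The data-profiling duty above can be pictured with a short pandas pass over an acquired dataset; source.csv is a placeholder for whatever primary or secondary source is being profiled.

```python
# Hypothetical sketch: source.csv stands in for the acquired dataset.
import pandas as pd

df = pd.read_csv("source.csv")

# Per-column profile: type, null counts and rates, and distinct values,
# the basics needed before source mapping or modeling work begins.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "nulls": df.isna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile.sort_values("null_pct", ascending=False))
```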

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
