9.0 years
0 Lacs
Andhra Pradesh
On-site
Data Engineer

Must have 9+ years of experience in the skills mentioned below.

Must Have: Big Data concepts, Python (core Python, able to write code), SQL, Shell Scripting, AWS S3

Good to Have: Event-driven architecture/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
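Since the must-have list pairs core Python with AWS S3, here is a minimal, hedged sketch of the kind of task that combination implies: reading an object from S3 with boto3. The bucket and key names are hypothetical placeholders, not anything from the posting.

```python
import boto3

# Hypothetical bucket and key, for illustration only.
BUCKET = "example-data-lake"
KEY = "raw/events/2025/06/24/events.json"

def read_object_from_s3(bucket: str, key: str) -> str:
    """Fetch an object from S3 and return its body as text."""
    s3 = boto3.client("s3")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")

if __name__ == "__main__":
    print(read_object_from_s3(BUCKET, KEY)[:500])
```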
Posted 4 days ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Senior Data Engineer to join our dynamic team in Bangalore. You will design, develop, and maintain scalable data ingestion frameworks and ELT pipelines using tools such as DBT, Apache Airflow, and Prefect. The ideal candidate will have deep technical expertise in cloud platforms (especially AWS), data architecture, and orchestration tools. You will work with modern cloud data warehouses like Snowflake, Redshift, or Databricks and integrate pipelines with AWS services such as S3, Lambda, Step Functions, and Glue. A strong background in SQL, scripting, and CI/CD practices is essential. Experience with data systems in manufacturing is a plus.
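As a rough illustration of the ELT orchestration this role describes (not the team's actual pipeline), a minimal Airflow 2.x DAG that stages data and then triggers a dbt run might look like the following; the DAG id, script, and project paths are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Sketch of a daily ELT pipeline: extract to S3, then transform with dbt.
with DAG(
    dag_id="daily_elt_pipeline",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="python /opt/pipelines/extract.py",  # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",  # hypothetical path
    )
    extract >> transform
```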
Posted 4 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description

IES Prime is building a team to take the Prime experience of customers to the next level by building capabilities that are relevant for Prime as well as non-Prime customers in IN and other emerging markets. Our development team plays a pivotal role in this program, with the mission to build a comprehensive solution for the India Prime business. This is a rare opportunity to be part of a team that will be responsible for building a successful, sustainable and strategic business for Amazon Prime, expanding the coverage of recurring payments for Prime in India and taking it to new emerging markets.

The candidate will be instrumental in shaping the product direction and will be actively involved in defining key product features that impact the business. You will work with Sr. and Principal Engineers at Amazon Prime to evolve the design and architecture of the products owned by this team. You will be responsible for setting and holding a high software quality bar, besides providing technical direction to a highly technical team of Software Engineers.

As part of this team, you will work to ensure the Amazon.in Prime experience is seamless and delivers the best shopping experience. It's a great opportunity to develop and enhance experiences for mobile devices first. You will analyze latency across the various Amazon.in pages using Redshift, DynamoDB, S3, Java, and Spark. You will get the opportunity to code on almost all key pages of the retail website, building features and improving business metrics. You will also contribute to reducing latency for customers by reducing the bytes on the wire and adapting the UX based on network bandwidth. You will be part of a team that obsesses about the performance of our customers' experience and enjoys the flexibility to pursue what makes sense. Come enjoy an exploratory and research-oriented team of Cowboys working in a fast-paced environment, always eager to take on big challenges.

Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
- 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience
- Experience programming with at least one software programming language

Preferred Qualifications
- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Haryana
Job ID: A3017843
Posted 4 days ago
1.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Technology @Dream11:

Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million rpm (requests per minute) at peak, with user concurrency of over 16.5 million. We have 190+ microservices written in Java and backed by the Vert.x framework. These serve isolated product features with discrete architectures to cater to the respective use cases. We work with terabytes of data, the infrastructure for which is built on top of Kafka, Redshift, Spark, Druid, etc., and it powers a number of use cases like machine learning and predictive analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc.

Your Role:
- Solving real business problems; defining and proactively tracking primary and other related metrics, and building hypotheses around them
- Designing and executing product experiments with minimal supervision
- Co-owning business metrics and identifying opportunities to improve them
- Designing RCA frameworks, establishing KPI trees (under guidance) and creating alerting and anomaly-detection mechanisms (a sketch follows below)
- Proactively identifying user problems through RPA, metric anomalies, funnels and product usage behaviours
- Implementing best practices in documenting, backing up code, and overall data compliance

Qualifiers:
- 1+ years of relevant professional experience in the Analytics domain
- Proficient in advanced SQL and visualization tools (Looker/Tableau, etc.)
- Base competency in programming languages (R/Python/Jupyter Notebooks, etc.) and product analytics tools (Omniture/GA/Amplitude, etc.)
- Basic understanding of Machine Learning

About Dream Sports:

Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 'Sportans'. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to 'Make Sports Better' for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Dream11 is the world's largest fantasy sports platform, with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
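For the alerting and anomaly-detection responsibility above, one common approach (offered here only as a hedged sketch, not Dream11's actual method) is a rolling z-score flag on a metric series in pandas; the sample data is made up.

```python
import pandas as pd

def flag_anomalies(metric: pd.Series, window: int = 7, z_thresh: float = 3.0) -> pd.Series:
    """Flag points whose z-score against the trailing window exceeds a threshold."""
    trailing = metric.shift(1).rolling(window)  # exclude the current point from the baseline
    z_scores = (metric - trailing.mean()) / trailing.std()
    return z_scores.abs() > z_thresh

# Hypothetical daily-active-user counts; the spike to 180 gets flagged.
dau = pd.Series([100, 102, 98, 101, 99, 103, 100, 180, 101, 99])
print(flag_anomalies(dau))
```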
Posted 4 days ago
8.0 years
25 - 30 Lacs
India
Remote
Job Title: Informatica ETL Developer

Primary Skills: Informatica ETL, SQL, Data Migration, Redshift
Total Experience Required: 8+ Years
Relevant Informatica Experience: Minimum 5+ Years
Location: Remote
Employment Type: Contract

Job Overview

We are seeking an experienced Informatica ETL Developer with strong expertise in ETL development, data migration, and SQL optimization, especially on Amazon Redshift. The ideal candidate will have a solid foundation in data warehousing principles and hands-on experience with Informatica PowerCenter and/or Informatica Cloud, along with proven experience migrating ETL processes from platforms like Talend or IBM DataStage to Informatica.

Key Responsibilities
- Lead or support the migration of ETL processes from Talend or DataStage to Informatica.
- Design, develop, and maintain efficient ETL workflows using Informatica PowerCenter or Informatica Cloud.
- Write, optimize, and troubleshoot complex SQL queries, especially in Amazon Redshift (see the sketch below).
- Work on data lakehouse architectures and ensure smooth integration with ETL processes.
- Understand and implement data warehousing concepts, including star/snowflake schema design, SCDs, data partitioning, etc.
- Ensure ETL performance, data integrity, scalability, and data quality in all stages of processing.
- Collaborate with business analysts, data engineers, and other developers to gather requirements and design end-to-end data solutions.
- Perform performance tuning, issue resolution, and support production ETL jobs as needed.
- Contribute to design and architecture discussions, documentation, and code reviews.
- Work with structured and unstructured data and transform it as per business logic and reporting needs.

Required Skills & Qualifications
- Minimum 8+ years of experience in data engineering or ETL development.
- At least 5+ years of hands-on experience with Informatica PowerCenter and/or Informatica Cloud.
- Experience in ETL migration projects, specifically from Talend or DataStage to Informatica.
- Proficiency in Amazon Redshift and advanced SQL scripting, tuning, and debugging.
- Strong grasp of data warehousing principles, dimensional modeling, and ETL performance optimization.
- Experience working with data lakehouse architecture (e.g., S3, Glue, Athena, etc. with Redshift).
- Ability to handle large data volumes, complex transformations, and data reconciliation.
- Strong understanding of data integrity, security, and governance best practices.
- Effective communication skills and ability to work cross-functionally with both technical and non-technical stakeholders.

Nice To Have
- Experience with CI/CD for data pipelines or version control tools like Git.
- Exposure to Agile/Scrum development methodologies.
- Familiarity with Informatica Intelligent Cloud Services (IICS).
- Experience with Python or Shell scripting for automation.

Skills: ETL performance optimization, data integrity, Amazon Redshift, Informatica ETL, SQL, data migration, data lakehouse architecture, ETL development, data warehousing, data governance, architecture
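For the Redshift query-tuning work noted above, a typical lever is choosing distribution and sort keys when defining tables. The sketch below shows the pattern via psycopg2; the connection details and table are hypothetical, and this is illustrative only, not project code.

```python
import psycopg2

# Redshift lets you co-locate join keys (DISTKEY) and prune block scans
# (SORTKEY) at table-definition time; both are common tuning levers.
DDL = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTKEY (customer_id)   -- co-locate rows joined on customer_id
SORTKEY (sale_date);    -- prune blocks for date-range scans
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)
with conn, conn.cursor() as cur:
    cur.execute(DDL)
```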
Posted 4 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Data Analytics
Good-to-have skills: Microsoft SQL Server, AWS Redshift
Minimum 3 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions, providing insights and solutions to enhance application performance and user experience. Additionally, you will mentor team members, fostering a collaborative environment that encourages innovation and continuous improvement.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Analyze application performance metrics and implement improvements.

Professional & Technical Skills:
- Must-have skills: Proficiency in Data Analytics.
- Good-to-have skills: Experience with Microsoft SQL Server, AWS Redshift.
- Strong analytical skills to interpret complex data sets.
- Experience with data visualization tools to present findings effectively.
- Ability to work with large datasets and perform data cleaning and transformation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Analytics.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.
Posted 4 days ago
8.0 years
0 Lacs
Chandigarh, India
Remote
Senior AWS Data Engineer

Location: Remote
Experience: 7–8 years
Open Positions: 4
Work Hours: Must be comfortable working in UK shift timings

Job Summary:

We are looking for experienced AWS Data Engineers with a strong foundation in data engineering principles and hands-on expertise in building scalable data pipelines and assets. The ideal candidate should have in-depth experience in Python, PySpark, and AWS services, with a focus on performance optimization and modern software engineering practices.

Key Responsibilities:
- Design, develop, and optimize data pipelines using PySpark and Spark SQL.
- Refactor legacy codebases to improve clarity, maintainability, and performance.
- Apply test-driven development (TDD) principles, writing unit tests to ensure robust and reliable code (see the sketch below).
- Debug complex issues, including performance bottlenecks, concurrency challenges, and logical errors.
- Utilize Python best practices, libraries, and frameworks effectively.
- Implement and manage code versioning and artifact repositories (Git, JFrog Artifactory).
- Work with various AWS services, including S3, EC2, Lambda, Redshift, CloudFormation, etc.
- Provide architectural insights and articulate the benefits and trade-offs of AWS components.

Requirements:
- 7–8 years of strong hands-on experience with PySpark and Python (including Boto3 and relevant libraries).
- Proven ability to write clean, optimized, and scalable code in Spark SQL and PySpark.
- Solid understanding of version control systems, preferably Git, and experience with JFrog Artifactory.
- Strong knowledge of AWS services and architecture; ability to explain service benefits and use cases.
- Experience in modernizing legacy systems and implementing clean code practices.
- Strong debugging skills with the ability to identify and fix complex bugs and performance issues.
- Familiarity with TDD and writing comprehensive unit tests.
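To make the TDD expectation concrete, here is a hedged sketch of a small PySpark transformation with a pytest-style unit test; the function, column names, and data are hypothetical examples, not the team's codebase.

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def dedupe_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep only the most recent record per key, a typical pipeline step."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter(F.col("_rn") == 1)
              .drop("_rn"))

# A unit test in the TDD spirit the posting asks for (run with pytest).
def test_dedupe_latest():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [(1, "2025-01-01"), (1, "2025-01-02"), (2, "2025-01-01")],
        ["id", "updated_at"],
    )
    result = dedupe_latest(df, "id", "updated_at").collect()
    assert {(r.id, r.updated_at) for r in result} == {(1, "2025-01-02"), (2, "2025-01-01")}
```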
Posted 4 days ago
3.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: Gurugram, Mumbai, Pune & Bangalore
Years of experience: 3–8 years

We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, and Databricks. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, and Databricks.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
  - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
  - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
- Optimize Spark job performance to ensure high efficiency and reliability (a tuning sketch follows below).
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.

Qualifications:
- 3–8 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge and hands-on experience with cloud services (AWS/Azure).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS, Azure) is a plus.
- Familiarity with Spark Streaming is a bonus.

Apply with your details at the link below:
https://forms.office.com/r/mtWD1WL6Rt
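For the Spark performance-optimization point above, a common pattern is broadcasting a small dimension table so the large fact table is never shuffled for the join. The sketch below is illustrative only; the S3 paths, join column, and partition count are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # size this to the cluster
         .getOrCreate())

# Hypothetical input paths.
facts = spark.read.parquet("s3://example-bucket/facts/")
small_dim = spark.read.parquet("s3://example-bucket/dim_country/")

# Broadcasting the small side avoids shuffling the large fact table,
# one of the most common Spark join optimizations.
joined = facts.join(broadcast(small_dim), on="country_code", how="left")
joined.write.mode("overwrite").parquet("s3://example-bucket/output/")
```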
Posted 4 days ago
0 years
0 Lacs
Ayanavaram, Tamil Nadu, India
On-site
The purpose of this role is to create and support our business intelligence solutions. The role will help drive BI requirements with clients and ensure accurate reporting documentation is captured. It requires a strong understanding of BI tools and technologies and will directly interface with clients, third parties and other teams.

Job Description:

Key responsibilities:
- Possesses a higher-level understanding of database and data management concepts and principles
- Communicates at a very high level, both written and verbal, with both internal and external stakeholders
- Defines best practices and the design/development of the data layers for reporting development and deployment
- Participates in the overall client engagement as the reporting Subject Matter Expert (SME) in delivering reporting solutions, collaborating with database administrators, database developers, data stewards and others to ensure the accurate collection and analysis of reporting requirements, including new data requirements, reporting layout and user needs
- Develops complex worksheets and dashboards for effective storytelling, along with ad-hoc data sets for end users
- Develops custom tables/views and data models across databases: SQL Server, Snowflake, BigQuery and/or Redshift

Location: DGS India - Chennai - Anna Nagar Tyche Towers
Brand: Paragon
Time Type: Full time
Contract Type: Permanent
Posted 4 days ago
7.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Data Engineering Lead | 7+ Years | Remote | Work Timings: 1:00 PM to 10:00 PM or 2:00 PM to 11:00 PM | Contract Duration: 3 Months+

Job Description:
- Provide technical leadership on data engineering (acquisition, transformation, distribution) across multiple projects and squads
- Guide, nurture, and coach a team of data engineers, architects and database administrators working in an agile, fast-paced environment
- Drive adoption and evolution of frameworks and products based on best-in-class data management technologies, such as ETL/ELT tools, open-source frameworks, workflow management tools, etc.
- Provide suggestions and recommendations on how to improve the overall data engineering architecture towards a scalable, cost-effective ecosystem

Your Deliverables:
- Design, implement, and manage data pipelines using our customer data platform, Treasure Data.
- Develop, test, and maintain custom workflows.
- Monitor and troubleshoot Airflow workflows to ensure optimal performance and reliability (a sketch follows below).
- Collaborate with data engineers and data scientists to understand data requirements and translate them into scalable workflows.
- Ensure data quality and integrity across all workflows.
- Document processes, workflows, and configurations.
- Interact with Redshift on a regular basis, ensuring pipelines are functioning as expected.

Your Qualities:
- Inherent curiosity and empathy for customer success
- Obsessive about solving customer problems
- Think long and act short
- Collaborative with your peers, partners and your team
- Excited about the mission and milestones, not titles and hierarchies
- Nurture an environment of experimentation and learning

Your Expertise:
- Excellent understanding of enterprise data warehousing principles of ETL/ELT and the tools/technologies used to implement them
- Proven experience with Treasure Data, including designing and maintaining complex workflows
- Strong knowledge of SQL programming and databases
- Strong knowledge of Digdag files
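For the Airflow monitoring responsibility above, reliability is often built in with retries and failure callbacks rather than by watching dashboards. The following is a hedged sketch under assumed names (the DAG id, task, and alert hook are hypothetical), not the actual Treasure Data workflow.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Hypothetical alert hook; in practice this might post to Slack or PagerDuty.
    print(f"Task {context['task_instance'].task_id} failed")

default_args = {
    "retries": 2,                          # retry transient failures automatically
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="cdp_sync",                     # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    default_args=default_args,
    catchup=False,
) as dag:
    PythonOperator(task_id="sync_segments", python_callable=lambda: None)
```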
Posted 4 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us:

We're building a powerful data analytics engine to transform raw market and historical data into actionable intelligence in the electronics supply chain industry. We're now looking to bring on a full-time Data Engineer to help us accelerate and scale up our pipeline development.

Job Overview:

We're seeking a hands-on Data Engineer to design, build, and optimize scalable ETL pipelines in AWS. You will take over and expand an existing pipeline currently being transitioned from prototype code (Python-based) into production-ready infrastructure. You'll collaborate with our engineering team, who will support and advise you in this transition. This is a key role with strong ownership in shaping and scaling our data infrastructure to support real-time analytics and ML/AI-driven features.

Responsibilities:
- Build, deploy, and maintain scalable ETL pipelines for millions of data records in AWS (using tools such as AWS Glue, Step Functions, S3 and PostgreSQL).
- Implement and manage pipeline orchestration using AWS Step Functions, transitioning towards automated execution (a sketch follows below).
- Collaborate with the ML and product team to integrate Python-based data cleaning and anomaly detection modules into the ETL pipeline.
- Automate and optimize jobs with CI/CD workflows, ensuring performance, observability, and reliability.
- Design and manage data lakes or warehouse schemas optimized for analytics and ML.
- Implement robust data quality checks, versioning, and validation steps.
- Work closely with our engineering team, who will provide architectural support.

Nice to Have:
- Contribute to Infrastructure as Code (CDKTF) efforts for environment management.

Required Qualifications:
- Minimum 4 years of experience as a Data Engineer or in a similar backend/data role.
- Proven experience building data pipelines and ETL workflows in AWS.
- Strong programming skills in Python.
- Experience working with relational databases like PostgreSQL, Redshift, or similar.
- Familiarity with CI/CD tools and infrastructure-as-code (e.g., Terraform, AWS CDK, CloudFormation).
- Understanding of basic ML/AI workflows and how to support them with data engineering.

Preferred Qualifications:
- Experience working in AWS or other cloud environments such as Azure or GCP.
- Background in data science or knowledge of ML model deployment and feature engineering.
- Familiarity with GitHub Actions, Docker, or serverless computing models.
- Exposure to large-scale data processing tools like Spark, Airflow, or dbt.

What We Offer:
- Opportunity to work on impactful problems with real-world applications in electronics supply chain intelligence.
- Direct mentorship from experienced engineers and AI product leads.
- A flexible, startup-friendly environment where your ideas and execution matter.
- Competitive compensation and the chance to grow with the company.

Industry: Business Intelligence Platforms
Employment Type: Full-time
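For the Step Functions orchestration point above, kicking off a state-machine execution from Python is a single boto3 call. A minimal sketch follows; the state machine ARN and input payload are hypothetical.

```python
import json

import boto3

# Starting a Step Functions execution is the kind of orchestration step
# the prototype-to-production transition involves; ARN and input are made up.
sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline",
    input=json.dumps({"run_date": "2025-06-24", "source": "market_feed"}),
)
print(response["executionArn"])
```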
Posted 4 days ago
8.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
🧠 We're Hiring: Data Architect (8+ Years Experience)
📍 Location: Jaipur
🏢 Hiring Company: ThoughtsWin System
📩 Send your CV to: shobha.jain@appzime.com or pragya.pandey@appzime.com

💼 Job Summary:
We're seeking a skilled Data Architect to lead the design and delivery of innovative, scalable data solutions that power our business. With deep expertise in Databricks and cloud platforms like Azure and AWS, you'll architect high-performance data systems and drive impactful analytics. If you thrive on solving complex data challenges, this is your chance to shine.

🔧 Key Responsibilities:
- 🧱 Design scalable data lakes, warehouses, and real-time streaming architectures
- ⚙️ Build and optimize data pipelines and Delta Lake solutions using Databricks (Spark, Workflows, SQL Analytics)
- ☁️ Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3)
- 🔄 Create and automate ETL/ELT workflows with Apache Spark, PySpark, and cloud tools
- 📊 Design robust data models (dimensional, normalized, star schemas) for analytics and reporting
- 🚀 Leverage big data tech like Hadoop, Kafka, and Scala for large-scale processing
- 🔐 Ensure data governance, security, and compliance (GDPR, HIPAA)
- ⚡ Optimize Spark workloads and storage for performance
- 🤝 Collaborate with engineering, analytics, and business teams to align data solutions with goals

✅ Required Skills & Qualifications:
- 🧠 8+ years in data architecture, engineering, or analytics roles
- 🔥 Hands-on with Databricks (Delta Lake, Spark, MLflow, pipelines)
- ☁️ Expertise in Azure (Synapse, Data Lake, Data Factory) & AWS (Redshift, S3, Glue)
- 🐍 Strong coding in SQL, Python, or Scala
- 🗃️ Experience with NoSQL (e.g., MongoDB) and streaming tools (e.g., Kafka)
- 📋 Knowledge of data governance and compliance practices
- ✨ Excellent problem-solving & communication skills
- 👥 Ability to work cross-functionally with multiple teams

🚀 Ready to architect the future of data? Send your CV to shobha.jain@appzime.com and be part of a visionary team.

#DataArchitect #BigData #Azure #AWS #Databricks #ETL #DataEngineering #Spark #Hiring #TechJobs #JaipurJobs #ThoughtsWinSystem #AppZimeHiring
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Position: Data Engineer
Experience: 3–5 years
Salary: up to 14 LPA
Location: Gandhinagar, Gujarat

Role: To lead a team of data engineers on various data development projects; to develop, test and deliver data systems on industry-leading cloud platforms and implement methods to improve data reliability and quality.

Requirements:
- Minimum 3 years of professional experience as a data engineer
- Strong working experience in Python and its various data analysis packages – Pandas/NumPy (a data-quality sketch follows below)
- Strong understanding of prevalent cloud ecosystems and experience in one of the cloud platforms – AWS/Azure/GCP
- Strong working experience in one of the leading MPP databases – Snowflake/Amazon Redshift/Azure Synapse/Google BigQuery
- Strong working experience in one of the leading cloud data orchestration tools – Azure Data Factory/AWS Glue/Apache Airflow
- Experience working with Agile methodologies, Test Driven Development, and implementing CI/CD pipelines using one of the leading services – GitLab/Azure DevOps/Jenkins/AWS CodePipeline/Google Cloud Build
- Data Governance/Data Management/Data Quality project implementation experience
- Experience in big data processing using Spark
- Strong experience with SQL databases (SQL Server, Oracle, Postgres, etc.)
- Stakeholder management experience and very good communication skills
- Working experience on end-to-end project delivery, including requirement gathering, design, development, testing, deployment, and warranty support
- Working experience with various testing levels, such as unit testing, integration testing and system testing
- Working experience with large, heterogeneous datasets in building and optimizing data pipelines and pipeline architectures

Nice-to-have skills:
- Working experience with Databricks notebooks and managing Databricks clusters
- Experience with a data modelling tool such as Erwin or ER/Studio
- Experience with one of the data architectures, such as Data Mesh or Data Fabric
- Has handled real-time or near-real-time data
- Experience with one of the leading reporting and analysis tools, such as Power BI, Qlik, Tableau or Amazon QuickSight
- Working experience with API integration
- General insurance/banking/finance domain understanding
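Given the emphasis above on Pandas/NumPy and data-quality projects, here is a hedged sketch of a basic data-quality report in pandas; the table and sample data are made up for illustration.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, key_cols: list[str]) -> dict:
    """Basic data-quality checks of the kind a governance project formalizes."""
    return {
        "row_count": len(df),
        "null_counts": df.isna().sum().to_dict(),
        "duplicate_keys": int(df.duplicated(subset=key_cols).sum()),
    }

# Made-up sample data: one duplicate key and one null amount to catch.
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [10.0, None, 15.5, 20.0],
})
print(quality_report(orders, key_cols=["order_id"]))
```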
Posted 4 days ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Responsibilities
- Design, develop, and maintain interactive dashboards and reports in Tableau to support business needs.
- Optimize Tableau dashboards for performance, scalability, and usability.
- Develop complex calculations, parameters, LOD expressions, and custom visualizations in Tableau.
- Work closely with business analysts, data engineers, and stakeholders to translate business requirements into meaningful visual analytics.
- Create and maintain data models, data blending, and relationships within Tableau.
- Implement governance and security protocols for Tableau Cloud/Server environments.
- Provide training and support to business users on self-service analytics.

You Are:
- 8+ years of experience in Tableau development and BI reporting.
- Strong proficiency in SQL (writing complex queries, stored procedures, performance tuning).
- Experienced with databases such as SQL Server, Snowflake, Redshift, BigQuery, etc.
- Strong understanding of data warehousing, dimensional modeling, and ETL processes.
- Experienced with Tableau Cloud/Server administration (publishing, security, permissions).
- Knowledge of data visualization best practices and UX/UI.

Qualifications:
- Tableau Desktop & Server certification (e.g., Tableau Certified Data Analyst, Tableau Desktop Specialist).
Posted 4 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description: ETL + QA Engineer

Role Overview:

We are looking for a skilled ETL + QA Engineer to join our team and ensure the quality and reliability of data pipelines, ETL processes, and applications. The ideal candidate should have expertise in both ETL testing and QA methodologies, with a strong understanding of data validation, automation, and performance testing.

Key Responsibilities:

ETL Testing:
- Validate ETL workflows, data transformations, and data migration processes.
- Perform data integrity checks across multiple databases (SQL, NoSQL, data lakes).
- Verify data extraction, transformation, and loading (ETL) processes from source to destination (a reconciliation sketch follows below).
- Write and execute SQL queries to validate data accuracy and consistency.
- Identify and report issues related to data loss, truncation, and incorrect transformations.

Quality Assurance:
- Develop, maintain, and execute test cases, test plans, and test scripts for ETL processes and applications.
- Perform manual and automation testing for APIs, UI, and backend systems.
- Conduct functional, integration, system, and regression testing.
- Utilize API testing tools (Postman, RestAssured) to validate endpoints and data responses.
- Automate test cases using Selenium, Python, or another scripting language.
- Participate in performance and scalability testing for ETL jobs.

CI/CD & Automation (preferable):
- Implement automation frameworks for ETL testing.
- Integrate tests within CI/CD pipelines using Jenkins, GitHub Actions, or Azure DevOps.
- Collaborate with developers and data engineers to improve testing strategies and defect resolution.

Required Skills & Experience:

Technical Skills:
- Strong experience in ETL testing and data validation.
- Proficiency in SQL and PL/SQL for complex queries and data verification.
- Hands-on experience with ETL tools like Informatica, Talend, SSIS, or Apache NiFi.
- Experience in API testing (Postman, RestAssured, SoapUI).
- Knowledge of automation tools (Selenium, Python, Java, or C#).
- Familiarity with cloud platforms (AWS, GCP, Azure) and data warehousing solutions (Snowflake, Redshift, BigQuery).
- Experience working with CI/CD pipelines and Git-based workflows.
- Knowledge of scripting languages (Python, Shell, or PowerShell) is a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.
- Ability to work in an Agile/Scrum environment.
- Strong collaboration skills to work with cross-functional teams.

Preferred Qualifications:
- Bachelor's/Master's degree in Computer Science, IT, or a related field.
- 4–7 years of experience in ETL testing and QA.
- Certification in ISTQB, AWS, Azure, or any relevant ETL tool is a plus.
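For the source-to-destination verification above, a staple ETL test is reconciling row counts between source and target. The self-contained sketch below uses in-memory SQLite purely for illustration; a real suite would point the connections at the actual source system and warehouse.

```python
import sqlite3

# Stand-in "source" and "target" databases; real tests would connect to
# the actual systems under test instead of in-memory SQLite.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")

src.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
src.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "a"), (2, "b")])

tgt.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
tgt.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "a"), (2, "b")])

def test_row_counts_match():
    """Fail loudly if the load dropped or duplicated rows."""
    src_count = src.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    tgt_count = tgt.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    assert src_count == tgt_count, f"source={src_count}, target={tgt_count}"

test_row_counts_match()
```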
Posted 4 days ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description

Cross Border (XB) plays a pivotal role in Amazon's global expansion, enabling customer acquisition and Prime engagement by offering worldwide selection in both Footprint (FP) and Non-Footprint (NFP) countries. By analyzing cross-border data, we can more accurately identify customer preferences, predict demand, and tailor our offerings accordingly, enhancing sales and customer satisfaction. As a Business Intelligence Engineer, you will be deciphering our customers' ever-evolving needs and shaping solutions that elevate their experience with Amazon.

A successful candidate will possess strong analytical and problem-solving skills, leveraging data and metrics to inform strategic decisions, along with impeccable attention to detail and clear, compelling communication skills, capable of articulating data insights to diverse stakeholders.

Key job responsibilities
- Deliver on all analytics requirements across business areas.
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, data pipelines, etc. to drive key business decisions.
- Ensure data accuracy by validating data for new and existing tools.
- Perform both ad-hoc and strategic data analyses.
- Build various data visualizations to tell the story and let the data speak for itself.
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
- Build automation to reduce dependencies on manual data pulls.

A day in the life

A day in the life of a Business Intelligence Engineer will include working closely with Product Managers and Software Developers: building dashboards, performing root cause analysis and sharing actionable insights with stakeholders to enable data-informed decision making; leading reporting and analytics initiatives; designing, developing, and maintaining ETL processes and data visualization dashboards using Amazon QuickSight; and transforming complex business requirements into actionable analytics solutions.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis, correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A3015460
Posted 4 days ago
0 years
0 Lacs
India
On-site
Before you start reading, please read this carefully! I've spent a good amount of time thinking about what to write :D

We will fast-track interview anyone who scores highly on this 15-minute assessment: https://www.ondemandassessment.com/o/JB-836Q461AE/landing?u=1173417

Who we are

Atlas Technologies is an anti-CRM built solely for recruitment professionals. A customer relationship management tool is required software to run any business. The problem is that data entry is extremely onerous, and these systems are only as useful as the quality of the data entered. We believe that if we know everything our users say, hear, read and write, a swarm of agents can do all the data entry for them. Our values are a big part of what we do. You can expect us to honour these principles: Values at Atlas Technologies.

Your role

You will use your technical skills in conjunction with AI coding assistants to create a streamlined and high-quality method of moving our incoming clients from their legacy platforms to our platform. You can anticipate data coming from 10–15 different systems, none of which have been founded in the last decade. As a result, we need to migrate older-generation data models to our Snowflake-style data architecture at scale. The good news is that with a lot of founder elbow grease and OpenAI, we've already created migration scripts for a good number of platforms. Still, these scripts will need constant evolution as our competitors' (and our own) data models are updated.

- Data Migrations: Design, manage, and optimize complex data migrations.
- Pipeline Development: Develop and maintain ETL/ELT processes using dbt, AWS, and Redshift to efficiently manage and transform large datasets.
- Data Transformation & Modeling: Build a deep understanding of recruitment CRMs (people, projects, companies) so the thought process becomes second nature.
- Own the end-to-end process: Including the injection of new data into our production environments and supporting customer success to delight the client during this migration.
- Documentation: Establish clear, efficient processes for data handling and write documentation for end-to-end pipeline management.
- Core Stack: AWS Redshift, dbt, Fivetran, SQL, Cursor

Who you are
- Customer migration experience: It's vital that you have conducted a number of customer migrations (20+), meaning specifically transforming data in order to onboard a customer into your software/solution.
- Intelligent: These complex problems require a degree of IQ (we will use cognitive assessments for this role).
- Dislike meetings: You prefer fewer meetings, with high-quality, rapid alignment and then focus time to deliver your tasks.
- You enjoy organising the unknown: We have the base tools for success (standard migration scripts and QA scripts), but there is still more to go. We expect the team to iterate on the current process to make it scalable.

What you'll get
- Competitive salary: We will pay competitively based upon location and experience, supplemented with many equity options (the founder has pledged almost 30% of the company to the small team).
- Work with experienced founders: An opportunity to help build a business from scratch with founders who've done it before.

Location

Our current team is spread across Greece, Italy, and India, and we have a preference for candidates in similar time zones. However, we're open to other locations if there's reasonable time zone alignment.

Interview Approach
1. 45-minute call with Jordan, Co-Founder: Discuss your experience and the vision for this role.
2. IQ + competency test (no coding here).
3. 15-minute catch-up to ask questions.
4. Technical challenge: A data migration or transformation exercise to demonstrate your technical skills and efficiency with dbt, Redshift, or similar tools.
5. 60-minute technical interview with Pavel, VP of Engineering: An in-depth conversation to evaluate technical expertise and problem-solving.
6. Cultural interviews: Brief chats with other team members to ensure a strong cultural fit.
7. Offer – ready to begin the journey with us?
Posted 5 days ago
4.0 - 6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Kyndryl Software Engineering, Data Science
Mumbai, Maharashtra, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India
Posted on Jun 24, 2025

Who We Are

At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role

As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. You will design, build and manage the infrastructure and systems that enable organizations to collect, process, store, and analyze large volumes of data. You will be the architect and builder of the data pipelines, ensuring that data is accessible, reliable, and optimized for various uses, including analytics, machine learning, and business intelligence.

In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll delve into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset – a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful.

Key Responsibilities
- Designing and Building Data Pipelines: Creating robust, scalable, and efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines to move data from various sources into data warehouses, data lakes, or other storage systems. Ingest data that is structured, unstructured, streaming, or real-time.
- Data Architecture: Designing and implementing data models, schemas, and database structures that support business requirements and data analysis needs.
- Data Storage and Management: Selecting and managing appropriate data storage solutions (e.g., relational databases, NoSQL databases, data lakes like HDFS or S3, data warehouses like Snowflake, BigQuery, Redshift).
- Data Integration: Connecting disparate data sources, ensuring data consistency and quality across different systems.
- Performance Optimization: Optimizing data processing systems for speed, efficiency, and scalability, often dealing with large datasets (Big Data).
- Data Governance and Security: Implementing measures for data quality, security, privacy, and compliance with regulations.
- Collaboration: Working closely with Data Scientists, Data Analysts, Business Intelligence Developers, and other stakeholders to understand their data needs and provide them with clean, reliable data.
- Automation: Automating data processes and workflows to reduce manual effort and improve reliability.

So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl

Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are

You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Expertise
- 4–6 years of experience as a Data Engineer.
- Programming Languages: Strong proficiency in languages like Python, Java, Scala.
- Database Management: Expertise in SQL and experience with various database systems (e.g., PostgreSQL, MySQL, SQL Server, Oracle).
- Big Data Technologies: Experience with frameworks and tools like Apache Spark, NiFi, Kafka, Flink, or similar distributed processing technologies.
- Cloud Platforms: Proficiency with cloud data services from providers like Microsoft Azure (Azure Data Lake, Azure Synapse Analytics), Fabric, Cloudera, etc.
- Data Warehousing: Understanding of data warehousing concepts, dimensional modelling, and schema design.
- ETL/ELT Tools: Experience with data integration tools and platforms.
- Version Control: Familiarity with Git and collaborative development workflows.

Preferred Technical and Professional Experience
- Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.

Being You

Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect

With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!

If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 5 days ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary

Auriga IT is seeking a proactive, problem-solving Data Analyst with 3–5 years of experience owning end-to-end data pipelines. You'll partner with stakeholders across engineering, product, marketing, and finance to turn raw data into actionable insights that drive business decisions.

Your Responsibilities

Pipeline Management
- Design, build, and maintain ETL/ELT workflows using orchestration frameworks (e.g., Airflow, dbt).

Exploratory Data Analysis & Visualization
- Perform EDA and statistical analysis using Python or R (a sketch follows below).
- Prototype and deliver interactive charts and dashboards.

BI & Reporting
- Develop dashboards and scheduled reports to surface KPIs and trends.
- Configure real-time alerts for data anomalies or thresholds.

Insights Delivery & Storytelling
- Translate complex analyses (A/B tests, forecasting, cohort analysis) into clear recommendations.
- Present findings to both technical and non-technical audiences.

Collaboration & Governance
- Work cross-functionally to define data requirements, ensure quality, and maintain governance.
- Mentor junior team members on best practices in code, version control, and documentation.

Key Skills

You must know at least one technology from each category below:
- Data Manipulation & Analysis: Python (pandas, NumPy); R (tidyverse: dplyr, tidyr)
- Visualization & Dashboarding: Python (matplotlib, seaborn, Plotly); R (ggplot2, Shiny)
- BI Platforms: commercial or open-source (e.g., Tableau, Power BI, Apache Superset, Grafana)
- ETL/ELT Orchestration: Apache Airflow, dbt, or equivalent
- Cloud Data Services: AWS (Redshift, Athena, QuickSight), GCP (BigQuery, Data Studio), or Azure (Synapse, Data Explorer)
- Databases & Querying: strong SQL skills (PostgreSQL, MySQL, Snowflake); decent knowledge of NoSQL databases

Additionally
- Bachelor's or Master's in a quantitative field (Statistics, CS, Economics, etc.)
- 3–5 years in a data analyst (or similar) role with end-to-end pipeline ownership
- Strong problem-solving mindset and excellent communication skills
- Certification in Power BI or Tableau is a plus

Desired Skills & Attributes
- Familiarity with version control (Git) and CI/CD for analytics code.
- Exposure to basic machine-learning workflows (scikit-learn, caret).
- Comfortable working in Agile/Scrum environments and collaborating across domains.

About Company

Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind, and the inception of Auriga was solely based on a desire to keep working together with friends and enjoying an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!

Our Website: https://aurigait.com/
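As a hedged illustration of the EDA-and-visualization work described above (not Auriga's actual stack or data), a quick pandas/matplotlib pass over a made-up metric might look like this:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Made-up monthly revenue standing in for whatever the client pipeline delivers.
df = pd.DataFrame({
    "month": pd.date_range("2025-01-01", periods=6, freq="MS"),
    "revenue": [120, 135, 128, 150, 162, 171],
})

print(df.describe())  # quick numeric summary of the series

# Simple trend chart, saved for a report or dashboard prototype.
df.plot(x="month", y="revenue", marker="o", title="Monthly revenue")
plt.tight_layout()
plt.savefig("revenue_trend.png")
```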
Posted 5 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary

At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions (a Glue job sketch follows below)
- Knowledge of data lake and data warehouse concepts
- Experience working with AWS big data technologies
- Improve the data quality and reliability of data pipelines through monitoring, validation and failure detection
- Deploy and configure components to production environments

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark

Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4–8
Education qualification: BTech/MBA/MCA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering

Required Skills: AWS DevOps
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
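For the pipeline responsibility above, an AWS Glue job is the natural fit given the listed stack. Below is a hedged skeleton of a Glue PySpark job reading raw JSON from S3 and writing curated Parquet; the bucket paths and dropped field are hypothetical, and a real job would add the project's own transforms.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON from S3 (hypothetical path).
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/raw/"]},
    format="json",
)

# Minimal cleanup step; real pipelines would do far more here.
cleaned = raw.drop_fields(["_corrupt_record"])

# Write curated Parquet back to S3 (hypothetical path).
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/"},
    format="parquet",
)
job.commit()
```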
Posted 5 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support. *Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Responsibilities: Design and build data pipelines & Data lakes to automate ingestion of structured and unstructured data that provide fast, optimized, and robust end-to-end solutions Knowledge about the concepts of data lake and dat warehouse Experience working with AWS big data technologies Improve the data quality and reliability of data pipelines through monitoring, validation and failure detection. 
Deploy and configure components to production environments.
Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8
Education qualification: BTech/MBA/MCA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Required Skills: AWS DevOps
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
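For candidates gauging what this role's pipeline work looks like in practice, here is a minimal, hypothetical PySpark sketch of the S3-to-data-lake ingestion step the responsibilities above describe. Bucket names, paths, and columns are illustrative placeholders, not details from the posting, and S3 access is assumed to be configured in the Spark environment.

```python
# Minimal sketch of an S3 ingestion job for a data lake (hypothetical names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-ingest-example").getOrCreate()

# Read raw, semi-structured events landed in the "raw" zone of the lake.
raw = spark.read.json("s3://example-raw-bucket/events/")

# Basic cleansing: drop rows missing key fields, derive a partition column.
cleaned = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write to the curated zone as partitioned Parquet, ready for a Redshift
# COPY or Redshift Spectrum queries.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))
```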
Posted 5 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our client is a leading mobile marketing and audience platform that empowers the app ecosystem through cutting-edge solutions in mobile marketing, audience building, and monetization. With direct integration into over 500,000 monthly active mobile apps, the platform leverages global first-party data to unlock valuable insights, predict behaviors, and drive growth. We are looking for an experienced and innovative Senior Business Analyst to join their Operational Department.

Job Description:

Key Responsibilities:
• Cross-Functional Collaboration: Act as a key analytics partner for business, product, and R&D teams, aligning projects with strategic goals.
• Data Analysis & Insights: Design and execute analytics projects, including quantitative analysis, statistical modeling, automated monitoring tools, and advanced data insights.
• Business Opportunity Identification: Leverage our client's extensive first-party data to identify trends, predict behaviors, and uncover growth opportunities.
• Strategic Reporting: Create impactful dashboards, reports, and presentations to communicate insights and recommendations to stakeholders at all levels.
• Innovation: Drive the use of advanced analytics techniques, such as machine learning and predictive modeling, to enhance decision-making processes.

Requirements:
• Experience: 6+ years as a Data Analyst (or similar role) in media, marketing, or a related industry.
• Technical Skills: Proficiency in SQL and Excel, with experience working with large datasets and big data tools (e.g., Vertica, Redshift, Hadoop, Spark). Familiarity with BI and visualization tools (e.g., Tableau, MicroStrategy).
• Analytical Expertise: Strong problem-solving skills, statistical modeling knowledge, and familiarity with predictive analytics and machine learning algorithms.
• Strategic Thinking: Ability to align data insights with business objectives, demonstrating creativity and out-of-the-box thinking.
• Soft Skills: Proactive, independent, collaborative, and results-driven, with excellent communication skills in English.

Educational Background: BSc in Industrial Engineering, Computer Science, Mathematics, or a related field (MSc/MBA is an advantage).

*** Only candidates residing in Bangalore will be considered.
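As a rough illustration of the predictive-modeling work this posting describes, the sketch below fits a simple retention model on first-party activity data. The file name, feature columns, and target label are hypothetical; this is not the client's actual workflow.

```python
# Illustrative retention model on hypothetical first-party activity data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("user_activity.csv")  # hypothetical extract

# Placeholder engagement features and a 30-day retention label.
features = ["sessions_7d", "avg_session_minutes", "days_since_install"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["retained_30d"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```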
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Business Intelligence Analyst (Minimum Three Years of Total Experience)

We are seeking an experienced and highly skilled Business Intelligence Analyst who is passionate about transforming innovative ideas into effective, data-driven solutions. In this role, you will be responsible for designing and delivering business intelligence solutions, which include ETL (Extract, Transform, Load) processes, data visualization, and data analysis. You will collaborate with cross-functional teams, including IT, Finance, and business stakeholders, to understand their requirements, design scalable and efficient solutions, and ensure the availability of insightful visualizations and reports. Your expertise will play a crucial role in driving data-driven decision-making and empowering the organization with actionable insights.

Success in this role will require creativity and the ability to work independently to analyze sales data and create business intelligence dashboards. This position entails complete end-to-end project ownership, which includes conducting discovery meetings with stakeholders, leveraging existing SQL queries, building new SQL queries, creating ETL processes using AWS Glue and Lambda, working with AWS Redshift, developing automation workflows, and utilizing data visualization tools such as AWS QuickSight and Power BI.

Key Responsibilities:
- Visualization and Reporting: Design and develop interactive dashboards, reports, and visualizations that provide actionable insights to business stakeholders using industry-leading tools and technologies.
- Data Analysis: Analyze data, identify insights, and collaborate with sales leaders to recommend business actions.
- Solution Design: Work closely with business stakeholders to understand their requirements and translate them into comprehensive and scalable business intelligence solutions.
- Data Asset Creation: Leverage internal data warehouses and external datasets to build new data assets for analysis and visualization.
- ETL Development: Design and implement ETL processes to extract, transform, and load data, focusing on automation, accuracy, and reusability.
- Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data analysts and sales leaders, to understand requirements, gather feedback, and ensure the successful delivery of solutions.
- Documentation and Training: Create comprehensive documentation of BI solutions (including ETL processes and visualizations) and provide training and support to users and stakeholders on the effective use of business intelligence dashboards and analytics.

We recognize that skills and competencies can manifest in various ways and may stem from diverse life experiences. If you do not meet all the listed requirements, we still encourage you to apply for the position.

Qualifications
The ideal candidate will have:
- Excellent problem-solving and analytical skills, with the ability to apply knowledge and creativity to resolve complex issues.
- Strong thought leadership and a quick understanding of how data and insights can be transformed into valuable features.
- Experience with:
  - Data Visualization Tools: AWS QuickSight (experience with Power BI or similar tools is also acceptable).
  - ETL Tools: AWS Glue and AWS Lambda.
  - Databases: SQL programming (experience with PostgreSQL and AWS Redshift).
- Exceptional project management skills, with the ability to organize and prioritize multiple tasks effectively.
- Strong interpersonal skills and the ability to collaborate with partners from various business units and levels within the organization.
- 3-5 years of experience in business intelligence, analytics, or related roles.
- Minimum 2 years of hands-on experience with AWS Glue, Lambda, RDS, Redshift, and S3.
- A BS or MS degree in Engineering, Data & Analytics, Information Systems, or a related field (a master's degree is a plus).
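For a flavor of the Glue/Lambda automation this role calls for, here is a hedged sketch of an event-driven ETL trigger: a Lambda handler that starts a Glue job when a new file lands in S3. The Glue job name and argument key are placeholders, not details from the posting.

```python
# Sketch: Lambda handler that triggers a (hypothetical) Glue ETL job
# in response to an S3 object-created event notification.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Pull the object key from the S3 event notification payload.
    record = event["Records"][0]
    key = record["s3"]["object"]["key"]

    # Start the Glue job, passing the key through as a job argument
    # so the job knows which file to process.
    response = glue.start_job_run(
        JobName="sales-etl-job",  # placeholder job name
        Arguments={"--input_key": key},
    )
    return {"job_run_id": response["JobRunId"]}
```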
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Business Intelligence Analyst (Minimum Five Years of Total Experience)

We are seeking an experienced and highly skilled Business Intelligence Analyst who is passionate about transforming innovative ideas into effective, data-driven solutions. In this role, you will be responsible for designing and delivering business intelligence solutions, which include ETL (Extract, Transform, Load) processes, data visualization, and data analysis. You will collaborate with cross-functional teams, including IT, Finance, and business stakeholders, to understand their requirements, design scalable and efficient solutions, and ensure the availability of insightful visualizations and reports. Your expertise will play a crucial role in driving data-driven decision-making and empowering the organization with actionable insights.

Success in this role will require creativity and the ability to work independently to analyze sales data and create business intelligence dashboards. This position entails complete end-to-end project ownership, which includes conducting discovery meetings with stakeholders, leveraging existing SQL queries, building new SQL queries, creating ETL processes using AWS Glue and Lambda, working with AWS Redshift, developing automation workflows, and utilizing data visualization tools such as AWS QuickSight and Power BI.

Key Responsibilities:
- Visualization and Reporting: Design and develop interactive dashboards, reports, and visualizations that provide actionable insights to business stakeholders using industry-leading tools and technologies.
- Data Analysis: Analyze data, identify insights, and collaborate with sales leaders to recommend business actions.
- Solution Design: Work closely with business stakeholders to understand their requirements and translate them into comprehensive and scalable business intelligence solutions.
- Data Asset Creation: Leverage internal data warehouses and external datasets to build new data assets for analysis and visualization.
- ETL Development: Design and implement ETL processes to extract, transform, and load data, focusing on automation, accuracy, and reusability.
- Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data analysts and sales leaders, to understand requirements, gather feedback, and ensure the successful delivery of solutions.
- Documentation and Training: Create comprehensive documentation of BI solutions (including ETL processes and visualizations) and provide training and support to users and stakeholders on the effective use of business intelligence dashboards and analytics.

We recognize that skills and competencies can manifest in various ways and may stem from diverse life experiences. If you do not meet all the listed requirements, we still encourage you to apply for the position.

Qualifications
The ideal candidate will have:
- Excellent problem-solving and analytical skills, with the ability to apply knowledge and creativity to resolve complex issues.
- Strong thought leadership and a quick understanding of how data and insights can be transformed into valuable features.
- Experience with:
  - Data Visualization Tools: AWS QuickSight (experience with Power BI or similar tools is also acceptable).
  - ETL Tools: AWS Glue and AWS Lambda.
  - Databases: SQL programming (experience with PostgreSQL and AWS Redshift).
- Exceptional project management skills, with the ability to organize and prioritize multiple tasks effectively.
- Strong interpersonal skills and the ability to collaborate with partners from various business units and levels within the organization.
- 5-8 years of experience in business intelligence, analytics, or related roles.
- A BS or MS degree in Engineering, Data & Analytics, Information Systems, or a related field (a master's degree is a plus).
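On the QuickSight side of this role, a common automation task is refreshing a dashboard's underlying dataset. The sketch below triggers a SPICE ingestion with boto3; the account ID and dataset ID are placeholders, not values from the posting.

```python
# Sketch: trigger a SPICE refresh for a (hypothetical) QuickSight dataset.
import time

import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

response = quicksight.create_ingestion(
    AwsAccountId="111122223333",                # placeholder account ID
    DataSetId="example-dataset-id",             # placeholder dataset ID
    IngestionId=f"refresh-{int(time.time())}",  # must be unique per request
)
print(response["IngestionStatus"])
```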
Posted 5 days ago
5.0 - 7.0 years
9 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Hiring Data Engineers with 3+ years of experience in Databricks, PySpark, Delta Lake, and AWS (S3, Glue, Redshift, Lambda, EMR). Strong SQL/Python, CI/CD, and data pipeline experience is a must. Only candidates from Tier-1 company backgrounds will be considered.
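To illustrate the Databricks/Delta Lake skill set this listing names, here is a minimal sketch of a Delta Lake upsert (MERGE) in PySpark. Paths and the join key are placeholders, and a Delta-enabled Spark session (e.g., Databricks or delta-spark) is assumed.

```python
# Sketch: upsert a staging extract into a Delta table (hypothetical paths).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-example").getOrCreate()

# New/changed rows staged as Parquet by an upstream job.
updates = spark.read.parquet("s3://example-bucket/staging/orders/")

# Existing curated Delta table.
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/orders/")

# MERGE: update matching rows, insert new ones, keyed on order_id.
(target.alias("t")
       .merge(updates.alias("u"), "t.order_id = u.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```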
Posted 5 days ago