
3597 Redshift Jobs - Page 48

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Greater Kolkata Area

Remote

Experience: 5 to 8 years. Location: Remote.

We are seeking a Senior Informatica ETL Developer with 5 to 8 years of hands-on experience in Informatica development and ETL processes. The ideal candidate will have a strong background in designing, developing, and optimizing ETL workflows, with significant expertise in Amazon Redshift, SQL, and ETL migration projects from tools like Talend or DataStage to Informatica (PowerCenter or IICS).

Key Responsibilities
Design, develop, and maintain ETL workflows using Informatica PowerCenter and/or Informatica Cloud (IICS). Migrate ETL jobs from Talend or DataStage to Informatica, ensuring performance, accuracy, and maintainability. Write and optimize complex SQL queries, primarily in Amazon Redshift. Collaborate with business and technical teams to gather requirements and deliver data solutions. Work with large volumes of structured and unstructured data, ensuring performance and data integrity. Support and troubleshoot complex ETL jobs and data pipelines. Conduct performance tuning and issue resolution for ETL workflows. Apply data warehousing and data lakehouse architecture principles to design robust data integration solutions. Ensure scalability, reliability, and quality of data systems.

Required Skills & Qualifications
5+ years of hands-on experience in Informatica ETL development (PowerCenter/IICS). Strong experience in ETL migration from Talend or DataStage to Informatica. Proficiency in Amazon Redshift with solid SQL development and optimization skills. Deep understanding of data warehousing concepts, architectures, and best practices. Familiarity with data lakehouse architectures and integration strategies. Experience with cloud data platforms (AWS, Azure, GCP) is a plus. Knowledge of additional ETL tools and big data ecosystems is a bonus. Familiarity with Agile/Scrum methodologies. (ref:hirist.tech)
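For context on the Redshift SQL side of this role, below is a minimal, illustrative sketch of a post-migration reconciliation check: it connects to Redshift over the PostgreSQL wire protocol with psycopg2 and compares row counts between a hypothetical staging table and its target. The table names and connection details are assumptions made for illustration, not details from the posting.

```python
# Illustrative only: reconcile row counts after an ETL migration cutover.
# Connection details and table names are placeholders, not from the posting.
import psycopg2  # Redshift accepts PostgreSQL-protocol clients such as psycopg2

TABLES = [("staging.orders", "analytics.orders")]  # hypothetical source/target pairs

def row_count(cur, table):
    cur.execute(f"SELECT COUNT(*) FROM {table};")
    return cur.fetchone()[0]

def reconcile(dsn):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for source, target in TABLES:
            src, tgt = row_count(cur, source), row_count(cur, target)
            status = "OK" if src == tgt else "MISMATCH"
            print(f"{source} -> {target}: {src} vs {tgt} [{status}]")

if __name__ == "__main__":
    # Example DSN; in practice credentials would come from a secrets manager.
    reconcile("host=my-cluster.example.redshift.amazonaws.com port=5439 "
              "dbname=dev user=etl_user password=***")
```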

Posted 4 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: Data Engineer. Experience: 5 to 12 years. Location: Pan-India (remote work model). Employment Type: Full-Time.

Job Summary
We are hiring an experienced Data Engineer to join our growing data team. The ideal candidate will have hands-on expertise in SQL, AWS, Snowflake, and Python, and will be responsible for building scalable data pipelines and supporting end-to-end data architecture. This opportunity is open to candidates across India.

Key Responsibilities
Design, build, and maintain robust and scalable ETL/ELT pipelines. Work with structured and unstructured data from multiple sources using AWS services such as S3, Redshift, Glue, and Lambda. Develop data integration and transformation workflows using Python and SQL. Optimize and manage Snowflake data warehouse performance, including data loading, partitioning, and query tuning. Collaborate with analysts, data scientists, and business stakeholders to understand data needs and deliver clean, reliable datasets. Ensure data quality, governance, and security across all stages of the pipeline.

Required Skills
5-12 years of experience in Data Engineering or a similar role. Strong knowledge of SQL and relational databases. Hands-on experience with AWS data services (e.g., S3, Redshift, Glue, Lambda). Proficiency in Python for scripting, automation, and data processing. Experience with Snowflake (data loading, transformation, and optimization). Solid understanding of data warehousing concepts and best practices.

Preferred Qualifications
Experience with data orchestration tools (e.g., Airflow, AWS Step Functions). Familiarity with CI/CD and version control tools (e.g., Git). Understanding of data governance, privacy, and security principles. Strong communication and problem-solving skills.

What We Offer
Remote work model with flexibility across major cities in India. Competitive salary with performance-based bonuses. Opportunity to work on modern data architecture and scalable solutions. (ref:hirist.tech)
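As an illustration of the kind of pipeline step this listing describes, the sketch below uses boto3's Redshift Data API to issue a COPY from S3 into Redshift. The bucket, table, IAM role, and workgroup names are hypothetical placeholders, not details from the posting.

```python
# Illustrative sketch: load one partition of S3 data into Redshift via the Redshift Data API.
# All identifiers (workgroup, database, table, bucket, IAM role) are placeholders.
import boto3

redshift_data = boto3.client("redshift-data", region_name="ap-south-1")

def load_partition(run_date: str) -> str:
    """Kick off a COPY for one day's Parquet files and return the statement id."""
    copy_sql = f"""
        COPY analytics.events
        FROM 's3://example-data-lake/events/dt={run_date}/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
        FORMAT AS PARQUET;
    """
    response = redshift_data.execute_statement(
        WorkgroupName="example-workgroup",   # Redshift Serverless workgroup (placeholder)
        Database="dev",
        Sql=copy_sql,
    )
    return response["Id"]

if __name__ == "__main__":
    statement_id = load_partition("2024-01-01")
    print("Submitted COPY statement:", statement_id)
```

In a production pipeline a step like this would typically be wrapped in a Lambda handler or an Airflow/Step Functions task, with completion checked by polling the statement status.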

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Key Responsibilities
Design and architect robust and scalable data platform and analytics solutions on AWS, with a focus on Amazon Redshift. Lead technical design sessions, define architectural blueprints, and oversee the implementation of data warehousing and BI solutions. Develop and implement workload isolation strategies within Amazon Redshift to ensure optimal performance and resource utilization. Perform performance tuning, including RPU sizing and concurrency scaling, for Amazon Redshift clusters (Serverless and/or Provisioned). Design and implement data sharing strategies and cross-cluster access patterns within Amazon Redshift. Collaborate closely with BI and reporting teams to understand their requirements and optimize the data infrastructure to support their needs (MicroStrategy experience is a significant plus). Create comprehensive technical documentation, including architecture diagrams, design specifications, and implementation guides. Effectively communicate technical concepts and solutions to both technical and non-technical stakeholders. Provide technical guidance and mentorship to other team members. Stay up-to-date with the latest AWS data services and best practices.

Technical Skills (Must Have)
Minimum 5 years of experience designing data platform or analytics architectures. Strong and demonstrable knowledge of Amazon Redshift (both Serverless and Provisioned). Proven ability to plan and implement workload isolation strategies within data warehousing environments. Solid understanding of Amazon Redshift performance tuning methodologies, RPU sizing, and concurrency scaling. Experience with Amazon Redshift Data Sharing and cross-cluster access patterns. Experience leading technical design sessions and overseeing the end-to-end implementation of data solutions. Solid background in Business Intelligence (BI) and reporting environments. Strong documentation skills with the ability to create clear and concise technical documentation. Excellent stakeholder communication skills, with the ability to effectively interact with technical and business audiences.

Good To Have
Hands-on experience with MicroStrategy or other enterprise BI platforms. Experience with other AWS data services such as S3, Glue, EMR, Athena, and QuickSight. Knowledge of data modeling techniques (e.g., Kimball, Inmon). Experience with data governance and security best practices on AWS. Familiarity with Infrastructure-as-Code (IaC) tools like CloudFormation or Terraform. (ref:hirist.tech)
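To make the data sharing and RPU sizing duties concrete, here is a minimal, hypothetical sketch: the SQL sets up a Redshift datashare for a consumer namespace, and the boto3 call adjusts the base RPU capacity of a Serverless workgroup. The share name, schema, namespace ID, workgroup name, and capacity value are placeholders, not details from the posting.

```python
# Illustrative sketch only; names, the namespace id, and RPU values are placeholders.
import boto3
import psycopg2

# 1) Data sharing: expose a schema from a producer to a consumer namespace.
DATASHARE_SQL = """
    CREATE DATASHARE reporting_share;
    ALTER DATASHARE reporting_share ADD SCHEMA analytics;
    ALTER DATASHARE reporting_share ADD ALL TABLES IN SCHEMA analytics;
    GRANT USAGE ON DATASHARE reporting_share
        TO NAMESPACE 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';
"""

def create_datashare(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # avoid wrapping DDL in an explicit transaction
    try:
        with conn.cursor() as cur:
            for statement in DATASHARE_SQL.split(";"):
                if statement.strip():
                    cur.execute(statement)
    finally:
        conn.close()

# 2) RPU sizing: adjust the base capacity of a Redshift Serverless workgroup.
def resize_workgroup(base_rpus: int = 32) -> None:
    client = boto3.client("redshift-serverless", region_name="ap-south-1")
    client.update_workgroup(
        workgroupName="example-reporting-workgroup",  # placeholder name
        baseCapacity=base_rpus,                       # base RPUs for the workgroup
    )
```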

Posted 4 weeks ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Description
We're seeking an experienced Data Architect to lead data modernization projects for top brands in the communication, media, and technology industries. You'll design and deliver analytics solutions, lead cross-functional teams, and drive data architecture strategies.

Key Responsibilities
Lead data architecture and integration for multiple projects. Design and deliver analytics solutions and applications. Collaborate with stakeholders to define strategy and requirements. Drive cross-functional development and delivery. Provide insight and guidance on emerging data analytics trends.

Requirements
8+ years of experience, with 3+ years as a Data Architect. Strong knowledge of cloud platforms (AWS, Azure, GCP) and EDW platforms (Snowflake, Redshift, BigQuery). Experience with data warehouse ETL design and development. Strong SQL and scripting skills (PL/SQL, UNIX). Excellent communication and leadership skills.

Nice To Have
Data security and access controls expertise. DevOps tools experience (GitHub, Cloud DevOps). Agile development framework experience. (ref:hirist.tech)

Posted 4 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing business demand for data. You will collaborate with Business Analysts, Managers, Software Development Engineers, and Data Engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence during those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of the declaration.

Key job responsibilities
Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Design, implement, automate, optimize, and monitor data models and data pipelines. Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Analyze and solve problems at their root, stepping back to understand the broader context. Manage Redshift/Spectrum/EMR infrastructure, and drive architectural plans and implementation for future data storage, reporting, and analytics solutions. Work with different AWS technologies such as S3, Redshift, Lambda, and Glue, and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its different components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Conduct rapid prototyping and proofs of concept. Conceptualize and develop automation tools for benchmarking, data collection, and analytics. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.

Basic Qualifications
Enrolled in a Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related technical discipline. Industry experience in Data Engineering, BI Engineering, or a related field. Hands-on experience building big data solutions using EMR, Elasticsearch, Redshift, or an equivalent MPP database. Hands-on experience and advanced knowledge of SQL and scripting languages such as Python, Shell, and Ruby. Hands-on experience working with different reporting/visualization tools available in the industry. Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts.

Preferred Qualifications
0-2 years of experience as a Data Engineer, BI Engineer, or a related role in a company with large, complex data sources. Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis, and Lambda) or equivalent industry tools. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience working with different SQL/NoSQL databases. Knowledge of the data lake platform and the different technologies used in a data lake to retrieve and process data.

Basic Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Preferred Qualifications
Enrolled in or have completed a Bachelor's degree within the last 12 months. Knowledge of different reporting/visualization tools in the industry.

Company - ADCI - Karnataka
Job ID: A2732288
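As a flavor of the Spark-on-EMR work mentioned in this listing, here is a small, hypothetical PySpark job that reads Parquet data from S3, aggregates it, and writes the result back to S3. The bucket paths and column names are placeholders, not details from the posting.

```python
# Hypothetical PySpark job of the kind run on EMR: aggregate daily events per customer.
# Bucket paths and column names are placeholders for illustration.
from pyspark.sql import SparkSession, functions as F

def main() -> None:
    spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

    events = spark.read.parquet("s3://example-data-lake/raw/events/")  # placeholder path

    daily_rollup = (
        events
        .withColumn("event_date", F.to_date("event_timestamp"))
        .groupBy("customer_id", "event_date")
        .agg(
            F.count("*").alias("event_count"),
            F.sum("order_value").alias("total_order_value"),
        )
    )

    # Partition by date so downstream Redshift Spectrum / Athena queries can prune partitions.
    daily_rollup.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-data-lake/curated/daily_events/"
    )

    spark.stop()

if __name__ == "__main__":
    main()
```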

Posted 4 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing business demand for data. You will collaborate with Business Analysts, Managers, Software Development Engineers, and Data Engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence during those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of the declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on different data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, and Elasticsearch. Work with different AWS technologies such as S3, Redshift, Lambda, and Glue, and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its different components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and have the ability to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2936655

Posted 4 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing business demand for data. You will collaborate with Business Analysts, Managers, Software Development Engineers, and Data Engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence during those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of the declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on different data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, and Elasticsearch. Work with different AWS technologies such as S3, Redshift, Lambda, and Glue, and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its different components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and have the ability to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2900027

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
Amazon is looking for a data-savvy professional to create, report on, and monitor business and operations metrics. Amazon has a culture of data-driven decision-making and demands business intelligence that is timely, accurate, and actionable. This role will help scope, influence, and evaluate process improvements, and will contribute to Amazon's success by enabling data-driven decision making that impacts the customer experience.

Key job responsibilities
You love working with data, can create clear and effective reports and data visualizations, and can partner with customers to answer key business questions. You will also have the opportunity to display your skills in the following areas: Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Analyze current testing processes, identify improvement opportunities, define requirements, and work with technical teams and managers to integrate them into their development schedules. Demonstrate good judgment in solving problems, as well as identifying problems in advance and proposing solutions. Derive actionable insights and present recommendations from your analyses to partner teams and organizational leadership. Translate technical testing results into business-friendly reports. Have a strong desire to dive deep and demonstrate the ability to do it effectively. Share your expertise: partner with and empower product teams to perform their own analyses. Produce high-quality documentation for processes and analysis results.

A day in the life
We are looking for a Business Analyst to join our team. This person will be a creative problem solver who cares deeply about what our customers experience, and is a highly analytical, team-oriented individual with excellent communication skills. In this highly visible role, you will provide reporting, analyze data, make sense of the results, and be able to explain what it all means to key stakeholders such as Front Line Managers (FLMs), QA Engineers, and Project Managers. You are a self-starter and a reliable teammate; you are comfortable with ambiguity in a fast-paced and ever-changing environment; you are able to see the big picture while paying meticulous attention to detail; you know what it takes to build trust; you are curious and thrive on learning. You will become a subject matter expert in the Device OS world.

About The Team
The Amazon Devices group delivers delightfully unique Amazon experiences, giving customers instant access to everything, digital or physical. The Device OS team plays a central role in creating these innovative devices at Lab126. The Device OS team is responsible for board bring-up, low-level software, core operating system architecture, innovative framework feature development, associated cloud services, and the end-to-end system functions that bring these devices to life. The software built by the Device OS team runs on all Amazon consumer electronics devices.

Basic Qualifications
3+ years of experience with Excel or Tableau (data manipulation, macros, charts, and pivot tables). 2+ years of experience writing complex Excel VBA macros. Experience defining requirements and using data and metrics to draw business insights. Experience with SQL or ETL.

Preferred Qualifications
Experience creating complex SQL queries joining multiple datasets and knowledge of ETL/DW concepts. Experience with Amazon Redshift and other AWS technologies.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Tamil Nadu
Job ID: A3025356
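To illustrate the "complex SQL queries joining multiple datasets" part of this role, below is a small, hypothetical sketch that runs a join against Redshift with pandas and exports a pivot-style summary to Excel. The table names, columns, and connection string are invented for illustration, not taken from the posting.

```python
# Hypothetical reporting pull: join test-run results with device metadata,
# pivot by device family and result, and export to Excel for stakeholders.
# Table/column names and the connection string are placeholders.
import pandas as pd
from sqlalchemy import create_engine

QUERY = """
    SELECT d.device_family,
           r.result_status,
           COUNT(*) AS run_count
    FROM qa.test_runs AS r
    JOIN qa.devices  AS d ON d.device_id = r.device_id
    WHERE r.run_date >= DATEADD(day, -7, CURRENT_DATE)
    GROUP BY d.device_family, r.result_status;
"""

def build_weekly_report(output_path: str = "weekly_test_summary.xlsx") -> None:
    engine = create_engine(
        "postgresql+psycopg2://report_user:***@example-cluster:5439/dev"  # placeholder DSN
    )
    df = pd.read_sql(QUERY, engine)

    summary = df.pivot_table(
        index="device_family",
        columns="result_status",
        values="run_count",
        aggfunc="sum",
        fill_value=0,
    )
    summary.to_excel(output_path)  # requires openpyxl

if __name__ == "__main__":
    build_weekly_report()
```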

Posted 4 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing business demand for data. You will collaborate with Business Analysts, Managers, Software Development Engineers, and Data Engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence during those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of the declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on different data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, and Elasticsearch. Work with different AWS technologies such as S3, Redshift, Lambda, and Glue, and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its different components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and have the ability to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2802226

Posted 4 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description "ABOUT AMAZON We’re on the lookout for the curious, those who think big and want to define the world of tomorrow. At Amazon, you will grow into the high impact, visionary person you know you’re ready to be. Every day will be filled with exciting new challenges, developing new skills, and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow. Overview Of The Role As a Business Analyst, you will own the creation, expansion and maintenance of a suite of analytics reports, visual dashboards and automation tools that provide insights to our leaders for taking decisions that impact customer or selling partner experience. To this end, you develop in depth knowledge of the data available in your domain and write efficient queries. You translate basic business problem statements into requirements and define the analytical approach. You, then, derive actionable recommendations from data and communicate the findings back with business stakeholders. Roles And Responsibilities Create, support and continuously improve reports and metrics that support the business operations Convert data into digestible business intelligence and actionable information using tools like Quicksight/Tableau Dive deep into large data sets to answer specific business questions using Excel, SQL and other data manipulation languages Work directly with business teams to utilize metrics and analysis to drive productivity improvement and customer experience Identify peculiar behavior or trends from the data, surface insights and provide objective interpretation of the insights that enables executive decision making Create small to medium complexity automations to reduce repetitive manual work. A Business Analyst will use the following tools - SQL (Oracle and Redshift) Excel Advanced VBA Tableau/Quicksight Report scheduling tools (internal)" Basic Qualifications " Bachelor's degree in Engineering, Statistics, Mathematics, Data Science or related field 0-4 years of prior years prior work experience Ability to create complex SQL Queries joining multiple datasets, ETL DW concepts. Knowledge of Python, VBA, Macros, Selenium scripts Demonstrate strength in analytical skills – has ability to ask right questions, analyze data, draw conclusion by making appropriate assumptions Working knowledge of Data Visualization tools such as Quick Sight / Tableau / Power BI or other BI packages Advanced ability to draw insights from data and clearly communicate them to the stakeholders and senior management as required" Preferred Qualifications Experience using very large datasets Prior experience in a BI role with a technology company or financial institution NLP & Text Processing Using Python / R for data analysis, statistical tools such as SAS Coding skills in Scala, R & Experience in Amazon Redshift and other AWS technologies" Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2961603

Posted 4 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Delhi, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune.

Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of this declaration.

Key job responsibilities
Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Design data models; implement, automate, optimize, and monitor data pipelines. Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Analyze and solve problems at their root, stepping back to understand the broader context. Manage Redshift/Spectrum/EMR infrastructure, and drive architectural plans and implementation for future data storage, reporting, and analytics solutions. Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Conduct rapid prototyping and proofs of concept. Conceptualize and develop automation tools for benchmarking, data collection, and analytics. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.

Basic Qualifications
Enrolled in a Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related technical discipline. Industry experience in data engineering, BI engineering, or a related field. Hands-on experience building big data solutions using EMR/Elasticsearch/Redshift or an equivalent MPP database. Hands-on experience and advanced knowledge of SQL and scripting languages such as Python, Shell, Ruby, etc. Hands-on experience with the reporting/visualization tools available in the industry. Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts.

Preferred Qualifications
0-2 years of experience as a Data Engineer, BI Engineer, or in a related field at a company with large, complex data sources. Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis, and Lambda) or equivalent industry tools. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience working with different SQL/NoSQL databases. Knowledge of data lake platforms and the technologies used in a data lake to retrieve and process data.

Basic Qualifications
Do you have knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis?

Preferred Qualifications
Enrolled in, or have completed within the last 12 months, a Bachelor's degree. Knowledge of different reporting/visualization tools in the industry.

Company - ADCI - Karnataka
Job ID: A2732288
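For context on the kind of work described above, here is a minimal, illustrative Python sketch of loading a file from S3 into a Redshift table with the Redshift Data API via boto3. The cluster, database, table, IAM role, and bucket names are placeholders, not details taken from the posting.

import boto3

# Minimal sketch, assuming a provisioned Redshift cluster and an IAM role that
# can read the source bucket; all identifiers below are placeholders.
redshift_data = boto3.client("redshift-data", region_name="ap-south-1")

copy_sql = """
    COPY analytics.daily_orders
    FROM 's3://example-data-lake/orders/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",  # use WorkgroupName instead for Redshift Serverless
    Database="dev",
    DbUser="etl_user",
    Sql=copy_sql,
)

# The Data API is asynchronous, so poll the statement before relying on the load.
status = redshift_data.describe_statement(Id=response["Id"])
print(status["Status"])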

Posted 4 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune.

Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of this declaration.

Key job responsibilities
Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Design data models; implement, automate, optimize, and monitor data pipelines. Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Analyze and solve problems at their root, stepping back to understand the broader context. Manage Redshift/Spectrum/EMR infrastructure, and drive architectural plans and implementation for future data storage, reporting, and analytics solutions. Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Conduct rapid prototyping and proofs of concept. Conceptualize and develop automation tools for benchmarking, data collection, and analytics. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.

Basic Qualifications
Enrolled in a Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related technical discipline. Industry experience in data engineering, BI engineering, or a related field. Hands-on experience building big data solutions using EMR/Elasticsearch/Redshift or an equivalent MPP database. Hands-on experience and advanced knowledge of SQL and scripting languages such as Python, Shell, Ruby, etc. Hands-on experience with the reporting/visualization tools available in the industry. Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts.

Preferred Qualifications
0-2 years of experience as a Data Engineer, BI Engineer, or in a related field at a company with large, complex data sources. Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis, and Lambda) or equivalent industry tools. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience working with different SQL/NoSQL databases. Knowledge of data lake platforms and the technologies used in a data lake to retrieve and process data.

Basic Qualifications
Do you have knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis?

Preferred Qualifications
Enrolled in, or have completed within the last 12 months, a Bachelor's degree. Knowledge of different reporting/visualization tools in the industry.

Company - ADCI - Karnataka
Job ID: A2732288
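As a rough illustration of the Spark SQL work this posting mentions, the following PySpark sketch reads raw events from a data-lake path and aggregates them with SQL. It assumes a Spark environment (such as EMR) with S3 access already configured; the bucket, paths, view, and column names are hypothetical.

from pyspark.sql import SparkSession

# Minimal sketch, assuming a Spark environment (e.g. EMR) with S3 access
# configured; bucket, paths, and columns are hypothetical.
spark = SparkSession.builder.appName("order-events-rollup").getOrCreate()

events = spark.read.parquet("s3://example-data-lake/raw/order_events/")
events.createOrReplaceTempView("order_events")

daily_totals = spark.sql("""
    SELECT order_date,
           marketplace,
           COUNT(*)         AS orders,
           SUM(order_total) AS revenue
    FROM order_events
    GROUP BY order_date, marketplace
""")

# Write the aggregate back to the lake for downstream reporting or dashboards.
daily_totals.write.mode("overwrite").parquet("s3://example-data-lake/curated/daily_order_totals/")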

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune.

Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of this declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc. Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and be able to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies like Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Are enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2936655
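For illustration only, a short boto3 sketch of querying a data-lake table with SQL through Amazon Athena, one of the AWS big data services mentioned above. The database, table, and results bucket are placeholder names, not details from the posting.

import boto3

# Minimal sketch, assuming the table is registered in the Glue Data Catalog;
# database, table, and output bucket are placeholders.
athena = boto3.client("athena", region_name="ap-south-1")

query = athena.start_query_execution(
    QueryString="SELECT marketplace, COUNT(*) AS orders FROM order_events GROUP BY marketplace",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Athena runs queries asynchronously; check the state before fetching results.
execution = athena.get_query_execution(QueryExecutionId=query["QueryExecutionId"])
print(execution["QueryExecution"]["Status"]["State"])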

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune.

Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of this declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc. Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and be able to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies like Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Are enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2900027
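The posting above emphasizes data integrity and validation; the following small pandas sketch shows one common way such checks are expressed in practice. The file and column names are hypothetical.

import pandas as pd

# Minimal sketch of basic data-integrity checks; the extract and columns are hypothetical.
orders = pd.read_csv("daily_orders.csv")

checks = {
    "no_null_order_ids": orders["order_id"].notna().all(),
    "unique_order_ids": orders["order_id"].is_unique,
    "non_negative_totals": (orders["order_total"] >= 0).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")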

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune.

Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of this declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc. Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and be able to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies like Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Are enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2802226

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
As a Data Engineer Intern, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job Locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune.

Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absence on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire duration of the internship, duly signed by a competent authority at their university. The internship offer will be subject to successful submission of this declaration.

Key job responsibilities
Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model designs, architecture, implementation, discussions, and optimizations. Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc. Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work with SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Enjoy working closely with your peers in a group of talented engineers and gain knowledge. Be enthusiastic about building deep domain knowledge of Amazon's various business domains. Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Basic Qualifications
Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field. Hands-on experience in SQL. Hands-on experience in languages like Python. Knowledge of RDBMS, Big Data, NoSQL, ETL, and data warehousing concepts. Candidates must have good written and oral communication skills, be fast learners, and be able to adapt quickly to a fast-paced development environment.

Preferred Qualifications
Hands-on experience with AWS technologies like Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of different reporting/visualization tools in the industry. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Basic Qualifications
Are enrolled in or have completed a Bachelor's degree.

Preferred Qualifications
Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2802226

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: BI Engineer – Amazon QuickSight Developer

Job Summary
We are seeking an experienced Amazon QuickSight Developer to join our BI team. This role requires deep expertise in designing and deploying intuitive, high-impact dashboards and managing all aspects of QuickSight administration. You’ll collaborate closely with data engineers and business stakeholders to create scalable BI solutions that empower data-driven decisions across the organization.

Key Responsibilities
Dashboard Development & Visualization: Design, develop, and maintain interactive QuickSight dashboards using advanced visuals, parameters, and controls. Create reusable datasets and calculated fields using both SPICE and Direct Query modes. Implement advanced analytics such as level-aware calculations, ranking, period-over-period comparisons, and custom KPIs. Build dynamic, user-driven dashboards with multi-select filters, dropdowns, and custom date ranges. Optimize performance and usability to maximize business value and user engagement.
QuickSight Administration: Manage users, groups, and permissions through QuickSight and AWS IAM roles. Implement and maintain row-level security (RLS) to ensure appropriate data access. Monitor usage, SPICE capacity, and subscription resources to maintain system performance. Configure and maintain themes, namespaces, and user interfaces for consistent experiences. Work with IT/cloud teams on account-level settings and AWS integrations.
Collaboration & Data Integration: Partner with data engineers and analysts to understand data structures and business needs. Integrate QuickSight with AWS services such as Redshift, Athena, S3, and Glue. Ensure data quality and accuracy through robust data modeling and SQL optimization.

Required Skills & Qualifications
3+ years of hands-on experience with Amazon QuickSight (development and administration). Strong SQL skills and experience working with large, complex datasets. Expert-level understanding of QuickSight security, RLS, SPICE management, and user/group administration. Strong sense of data visualization best practices and UX design principles. Proficiency with AWS data services including Redshift, Athena, S3, Glue, and IAM. Solid understanding of data modeling and business reporting frameworks.

Nice To Have
Experience with Python, AWS Lambda, or automating QuickSight administration via the SDK or CLI. Familiarity with modern data stack tools (e.g., dbt, Snowflake, Tableau, Power BI).

Apply Now
If you’re passionate about building scalable BI solutions and making data come alive through visualization, we’d love to hear from you!
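As a hedged illustration of the QuickSight administration automation this posting lists under "Nice To Have", the sketch below uses the boto3 QuickSight client to audit users and add a member to a group. The AWS account ID, user, and group names are placeholders, not details from the posting.

import boto3

ACCOUNT_ID = "123456789012"  # placeholder AWS account ID
quicksight = boto3.client("quicksight", region_name="us-east-1")

# Audit QuickSight users and their roles in the default namespace.
users = quicksight.list_users(AwsAccountId=ACCOUNT_ID, Namespace="default")
for user in users["UserList"]:
    print(user["UserName"], user["Role"])

# Add a user to a viewer group, e.g. when onboarding a new analyst.
quicksight.create_group_membership(
    MemberName="jane.doe",          # placeholder QuickSight user name
    GroupName="dashboard-viewers",  # placeholder group name
    AwsAccountId=ACCOUNT_ID,
    Namespace="default",
)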

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Saarthee:
Saarthee is a global Strategy, Analytics, Technology and AI consulting company, where our passion for helping others fuels our approach and our products and solutions. We are a one-stop shop for all things data and analytics. Unlike other analytics consulting firms that are technology or platform specific, Saarthee’s holistic and tool-agnostic approach is unique in the marketplace. Our Consulting Value Chain framework meets our customers where they are in their data journey. Our diverse and global team works with one objective in mind: our customers’ success. At Saarthee, we are passionate about guiding organizations towards insights-fueled success. That’s why we call ourselves Saarthee, inspired by the Sanskrit word ‘Saarthi’, which means charioteer, trusted guide, or companion. Co-founded in 2015 by Mrinal Prasad and Shikha Miglani, Saarthee already encompasses all the components of data analytics consulting. Saarthee is based out of Philadelphia, USA, with offices in the UK and India.

We are seeking a talented Talent Acquisition Executive/Lead. The ideal candidate will be responsible for driving talent acquisition strategies to support our company's growth objectives. You will work closely with the HR department, business leaders, and hiring managers to identify, attract, and hire top talent in the industry. If you are passionate about building high-performing teams and have a proven track record of sourcing, hiring, and retaining top talent in the data analytics industry and related fields, we encourage you to apply for this exciting opportunity.

Key Responsibilities:
Technical Talent Acquisition: Lead the end-to-end recruitment process for roles in Data Engineering, Data Science, and Data Analytics, including software engineers, data scientists, machine learning engineers, and data architects. Utilize your technical expertise to assess candidates' proficiency in programming languages (Python, Java, Scala), data pipelines (ETL, Kafka), cloud platforms (AWS, Azure, GCP), and big data technologies (Hadoop, Spark).
Technical Screening & Assessment: Design and implement rigorous technical assessment processes, including coding tests, algorithm challenges, and system design interviews, to ensure candidates meet the high technical standards required for our projects.
Stakeholder Collaboration: Partner with the CTO, engineering leads, and data science teams to understand the specific technical requirements of each role. Translate these needs into effective job descriptions, recruitment strategies, and candidate evaluation criteria.
Pipeline Development: Build and maintain a robust pipeline of highly qualified candidates by leveraging networks, industry events, online platforms (GitHub, Stack Overflow), and advanced sourcing techniques such as Boolean search, AI-driven talent matching, and targeted outreach.
Industry Expertise: Stay current with trends in Data Engineering, Data Science, and Analytics, including advancements in AI/ML, data warehousing (Snowflake, Redshift), real-time analytics, and DevOps practices. Use this knowledge to proactively identify and engage with potential candidates who are at the forefront of these fields.
Diversity & Inclusion in Tech: Implement strategies to ensure diverse and inclusive hiring practices, focusing on underrepresented groups in technology. Develop partnerships with organizations and communities that support diversity in tech.
Talent Development & Retention: Work with technical leadership to create clear career pathways for data and engineering professionals within the company. Support ongoing training and development initiatives to keep teams updated with the latest technologies and methodologies.

Qualifications:
Experience: 3+ years in Talent Acquisition, with significant experience recruiting for Data Engineering, Data Science, Data Analytics, and Technology roles in high-growth or technically complex environments.
Technical Knowledge: Strong background in the technologies and tools used in Data Engineering, Data Science, and Data Analytics, including but not limited to: AI/ML; programming languages: Python, R, Java, Scala; Big Data technologies: Hadoop, Spark, Kafka; cloud platforms: AWS, Azure, GCP; data processing: ETL, data pipelines, real-time streaming; analytics and BI tools: Tableau, Power BI, Looker.
Leadership: Proven experience in building teams with a focus on technical roles, driving strategies that result in successful, high-impact hires.
Analytical & Data-Driven: Expertise in using data to guide recruitment decisions and strategies, including metrics on sourcing, pipeline health, and hiring efficiency.
Communication: Excellent ability to communicate complex technical requirements to both technical and non-technical stakeholders.
Commitment to Excellence: A relentless focus on quality, with a keen eye for identifying top technical talent who can thrive in challenging, innovative environments.

Soft Skills:
Problem-Solving: Strong analytical and troubleshooting skills.
Collaboration: Excellent teamwork and communication skills to work effectively with cross-functional teams.
Adaptability: Ability to manage multiple tasks and projects in a fast-paced environment.
Attention to Detail: Precision in diagnosing and fixing issues.
Continuous Learning: A proactive attitude towards learning new technologies and improving existing skills.
Excellent verbal and writing skills.

Posted 4 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer II at JPMorgan Chase within Consumer Banking - Trust & Security, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
Work with architects, machine learning engineers, and data engineers to identify the technical and functional needs of data systems. Ensure adherence to the defined development life cycle, software design practices, and architecture strategy and intent. Contribute to application frameworks in support of greater resiliency and self-healing capabilities. Contribute to monitoring frameworks to accomplish end-to-end flow monitoring and noiseless alerting with proper telemetry. Implement performance tests; identify bottlenecks and opportunities for optimization and continuous improvement. Participate in deep design reviews with application and platform teams throughout the life cycle to help develop software for reliability, speed, and scale. Design and develop distributed computation and parallel processing components to support high-volume data pipelines. Support DevOps and CI/CD processes.

Required Qualifications, Capabilities, And Skills
Formal training or certification on software engineering concepts and 2+ years of applied experience. Advanced knowledge of application, data, and infrastructure architecture disciplines. Experience with Big Data technologies (Impala, Hive, Redshift, Kafka, etc.). Experience using Spark to process large amounts of data. Experience in Java/Python/SQL development. Expertise in the AWS stack: designing, coding, testing, and delivering solutions that support high data volumes. Experience with Spring Boot building microservices and/or web apps. Advanced knowledge of one or more infrastructure components (e.g., containerization with Docker, Kubernetes). Experience in end-to-end systems automation and orchestration. Experience with DevOps toolchains. Strong debugging and troubleshooting skills.

Preferred Qualifications, Capabilities, And Skills
Good understanding of SDLC and ITIL practices. Knowledge of industry-wide technology trends and best practices.
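For illustration of the kind of high-volume pipeline component this role describes, here is a minimal Kafka consumer sketch using the kafka-python client. The topic, broker address, group ID, and event fields are assumptions, not details from the posting.

import json
from kafka import KafkaConsumer

# Minimal sketch using the kafka-python client; topic, brokers, group id,
# and event fields are placeholders.
consumer = KafkaConsumer(
    "transaction-events",
    bootstrap_servers=["localhost:9092"],
    group_id="trust-security-pipeline",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Downstream steps (enrichment, scoring, persistence) would go here.
    print(event.get("account_id"), event.get("amount"))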

Posted 4 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Syren Cloud
Syren Cloud Technologies is a cutting-edge company specializing in supply chain solutions and data engineering. Its intelligent insights, powered by technologies like AI and NLP, empower organizations with real-time visibility and proactive decision-making. From control towers to agile inventory management, Syren unlocks unparalleled success in supply chain management.

Role Summary
An Azure Data Architect is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders’ needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

Job Responsibilities
Act as a subject matter expert providing best-practice guidance on data lake and ETL architecture frameworks suitable for handling big data, both unstructured and structured. Drive business and service layer development with the customer by finding new opportunities based on expanding existing solutions or creating new ones. Provide hands-on subject matter expertise to build and implement Azure-based Big Data solutions. Research, evaluate, architect, and deploy new tools, frameworks, and patterns to build sustainable Big Data platforms for our clients. Facilitate and/or conduct requirements workshops. Responsible for collaborating on the prioritization of technical requirements. Collaborate with peer teams and vendors on the solution and delivery. Hold overall accountability for project delivery. Work collaboratively with Product Management, Data Management, and other architects to deliver the cloud data platform and Data as a Service. Consult with clients to assess current problem states, define desired future states, define solution architecture, and make solution recommendations.

Job Requirements
Degree in computer science or equivalent preferred. Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications. 12+ years of hands-on experience with data modelling, database design, data mining, and segmentation techniques. Working knowledge and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing. Experience designing and building distributed systems capable of processing massive data volumes. Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP. Experience with debugging and performance tuning in distributed environments. Strong analytical skills with the ability to collect, organize, analyse, and broadcast significant amounts of information with attention to detail and accuracy. Experience dealing with structured and unstructured data. Must have Python and PySpark experience. EDW experience required: Azure Databricks, Redshift, Azure Synapse, etc. Experience in ML and/or graph analysis is a plus.

Mandatory Skills: Azure Data Engineering; strong programming skills with Azure Databricks & PySpark.
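As a rough sketch of the Azure Databricks and PySpark work this role calls for, the snippet below curates raw events from an ADLS Gen2 path into a Delta table. It assumes a Databricks-style environment with storage credentials already configured; the storage account, paths, columns, and table names are placeholders.

from pyspark.sql import SparkSession, functions as F

# Minimal sketch, assuming a Databricks-style Spark environment with access to
# the storage account already configured; all names below are placeholders.
spark = SparkSession.builder.appName("shipment-events-curation").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/shipment_events/")

curated = (
    raw.dropDuplicates(["shipment_id"])
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# Persist a curated Delta table for downstream analytics and reporting.
curated.write.format("delta").mode("overwrite").saveAsTable("supply_chain.shipment_events")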

Posted 4 weeks ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description
The role is for a 1-year term at Amazon.

Job Description
Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models, and systems for strategy planning, transportation, and the fulfillment network? If so, then this is the job for you. Our team is responsible for creating core analytics tech capabilities, platform development, and data engineering. We develop scalable analytics applications across APAC, MENA, and LATAM. We standardize and optimize data sources and visualization efforts across geographies, and build and maintain the online BI services and data marts. You will work with professional software development managers, data engineers, business intelligence engineers, and product managers using rigorous quantitative approaches to ensure high-quality data tech products for our customers around the world, including India, Australia, Brazil, Mexico, Singapore, and the Middle East.

Amazon is growing rapidly, and because we are driven by faster delivery to customers, a more efficient supply chain network, and lower cost of operations, our main focus is the development of strategic models and automation tools fed by our massive amounts of available data. You will be responsible for building models and tools that improve the economics of Amazon’s worldwide fulfillment networks in emerging countries as Amazon increases the speed and decreases the cost of delivering products to customers. You will identify and evaluate opportunities to reduce variable costs by improving fulfillment center processes, transportation operations and scheduling, and the execution of operational plans.

Major Responsibilities Include
Translating business questions and concerns into specific analytical questions that can be answered with available data using BI tools, and producing the required data when it is not available. Writing SQL queries and automation scripts. Ensuring data quality throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc. Communicating proposals and results in a clear manner, backed by data and coupled with actionable conclusions to drive business decisions. Collaborating with colleagues from multidisciplinary science, engineering, and business backgrounds. Developing efficient data querying and modeling infrastructure. Managing your own process: prioritizing and executing high-impact projects, triaging external requests, and ensuring projects are delivered on time. Utilizing code (SQL, Python, R, Scala, etc.) to analyze data and build data marts.

Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with data modeling, warehousing, and building ETL pipelines. Experience with statistical analysis packages such as R, SAS, and MATLAB. Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling.

Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ASSPL - Telangana
Job ID: A3005884
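For illustration, a minimal Python sketch of the SQL-plus-scripting workflow this role describes: querying Redshift and loading the result into a pandas DataFrame for analysis. The connection details, schema, and column names are placeholders; real credentials would come from a secrets manager rather than literals.

import pandas as pd
import redshift_connector

# Minimal sketch; host, credentials, schema, and columns are placeholders.
conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="analytics",
    user="bi_user",
    password="********",
)

sql = """
    SELECT ship_week, fulfillment_center, SUM(units) AS units_shipped
    FROM fulfillment.shipments
    GROUP BY ship_week, fulfillment_center
    ORDER BY ship_week
"""

cursor = conn.cursor()
cursor.execute(sql)
rows = cursor.fetchall()
columns = [col[0] for col in cursor.description]
cursor.close()
conn.close()

# Build a DataFrame for further analysis or a data-mart load.
shipments = pd.DataFrame(rows, columns=columns)
print(shipments.head())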

Posted 4 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
