2.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development and maintenance. 3. The work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise in the software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders. Job Description - Grade Specific: Has more than a year of relevant work experience. Solid understanding of programming concepts, software design and software development principles. Consistently works to direction with minimal supervision, producing accurate and reliable results. Expected to work on a range of tasks and problems, demonstrating the ability to apply relevant skills and knowledge. Organises own time to deliver against tasks set by others with a mid-term horizon. Works co-operatively with others to achieve team goals, has a direct and positive impact on project performance, and makes decisions based on an understanding of the situation, not just the rules. Skills (competencies): Verbal Communication
Posted 1 week ago
2.0 - 5.0 years
4 - 8 Lacs
Gurugram
Work from Office
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development and maintenance. 3. The work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise in the software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders. Job Description - Grade Specific: Has more than a year of relevant work experience. Solid understanding of programming concepts, software design and software development principles. Consistently works to direction with minimal supervision, producing accurate and reliable results. Expected to work on a range of tasks and problems, demonstrating the ability to apply relevant skills and knowledge. Organises own time to deliver against tasks set by others with a mid-term horizon. Works co-operatively with others to achieve team goals, has a direct and positive impact on project performance, and makes decisions based on an understanding of the situation, not just the rules.
Posted 1 week ago
3.0 - 7.0 years
11 - 15 Lacs
Bengaluru
Work from Office
A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently. Job Description - Grade Specific: An expert on the principles and practices associated with data platform engineering, particularly within cloud environments, with demonstrated proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability. Key responsibilities encompass: Team Leadership and Management: supervising a team of platform engineers, with a focus on team dynamics and the efficient delivery of cloud platform solutions. Technical Guidance and Decision-Making: providing technical leadership and making pivotal decisions concerning platform architecture, tools, and processes; balancing hands-on involvement with strategic oversight. Mentorship and Skill Development: guiding team members through mentorship, enhancing their technical proficiencies, and nurturing a culture of continual learning and innovation in platform engineering practices. In-Depth Technical Proficiency: possessing a comprehensive understanding of platform engineering principles and practices, and demonstrating expertise in crucial technical areas such as cloud services, automation, and system architecture. Community Contribution: making significant contributions to the development of the platform engineering community, staying informed about emerging trends, and applying this knowledge to drive enhancements in capability.
Posted 1 week ago
3.0 - 7.0 years
11 - 15 Lacs
Bengaluru
Work from Office
A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently. Job Description - Grade Specific: A strong grasp of the principles and practices associated with data platform engineering, particularly within cloud environments, with demonstrated proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability. Key responsibilities encompass: Community Engagement: actively participating in the professional data platform engineering community, sharing insights, and staying up to date with the latest trends and best practices. Project Contributions: making substantial contributions to client delivery, particularly in the design, construction, and maintenance of cloud-based data platforms and infrastructure. Technical Expertise: demonstrating a sound understanding of data platform engineering principles and knowledge in areas such as cloud data storage solutions (e.g., AWS S3, Azure Data Lake), data processing frameworks (e.g., Apache Spark), and data orchestration tools. Independent Work and Initiative: taking ownership of independent tasks, displaying initiative and problem-solving skills when confronted with intricate data platform engineering challenges. Emerging Leadership: commencing leadership roles, which may encompass mentoring junior engineers, leading smaller project teams, or taking the lead on specific aspects of data platform projects.
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune. Experience: 8-12 years. Work Mode: Hybrid. Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architecture Design. Overview: We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering services. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles and Responsibilities: Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines via the Airflow scheduler. Skills and Qualifications: Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience with the AWS/Azure stack. Desirable: ETL with batch and streaming (Kinesis). Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (including MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, Azure Databricks, architecture design, PySpark, Azure, Airflow
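For a concrete picture of the Airflow orchestration this posting asks for, here is a minimal sketch of a daily extract-transform-load DAG. The DAG id, task callables and schedule are illustrative assumptions, not details from the posting itself.

```python
# Minimal Apache Airflow sketch: a daily three-step ETL pipeline.
# DAG id, task names and the stubbed callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting source data")   # stub: pull raw records from a source


def transform():
    print("transforming data")        # stub: apply cleansing/business rules


def load():
    print("loading into warehouse")   # stub: write curated data to the DW


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task  # linear dependency chain
```

In practice each stub would trigger PySpark or Databricks jobs; the `>>` operator is how Airflow declares that one task must finish before the next starts.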
Posted 1 week ago
5.0 - 10.0 years
12 - 16 Lacs
Kolkata
Work from Office
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses. Your Role: 5+ years of experience in creating data strategy frameworks/roadmaps. Relevant experience in data exploration and profiling, and involvement in data literacy activities for all stakeholders. 5+ years in analytics and data maturity evaluation based on an as-is vs. to-be framework. 5+ years of relevant experience in creating functional requirements documents and enterprise to-be data architecture. Relevant experience in identifying and prioritizing use cases for the business; identification of important KPIs and opex/capex for CxOs. 2+ years of working knowledge in data strategy: data governance/MDM, etc. 4+ years of experience in a data analytics operating model with a vision spanning prescriptive, descriptive, predictive and cognitive analytics. Identify, design, and recommend internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Identify data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to create frameworks for digital twins/digital threads. Relevant experience in coordinating with cross-functional teams; acting as the SPOC for global master data. Your Profile: 8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field, and experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB. Experience with data pipeline and workflow management tools: Luigi, Airflow, etc. Good-to-have cloud skill sets (Azure/AWS/GCP). 5+ years of advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases: Postgres/SQL/Mongo. 2+ years of working knowledge in data strategy: data governance/MDM, etc. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. A successful history of manipulating, processing, and extracting value from large, disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable big data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment. About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Opportunity: A key player in the Big Data solutions space, we specialize in creating and implementing large-scale data processing frameworks. Our mission is to help clients harness the power of data analytics to drive business insights and operational efficiency. With a strong focus on leveraging cutting-edge technologies, we provide a collaborative environment conducive to professional growth and innovation. Role & Responsibilities: Design and implement scalable data processing frameworks using Hadoop and Spark. Develop ETL processes for data ingestion, transformation, and loading from diverse sources. Collaborate with data architects and analysts to optimize data models and enhance performance. Ensure data quality and integrity through rigorous testing and validation. Create and maintain documentation for data workflows, processes, and architecture. Troubleshoot and resolve data-related issues in a timely manner. Skills & Qualifications - Must-Have: Proficiency in the Hadoop ecosystem (HDFS, MapReduce, Hive). Hands-on experience with Apache Spark and its components. Strong SQL skills for querying relational databases. Experience with ETL tools and data integration technologies. Knowledge of data modeling techniques and best practices. Familiarity with Python for scripting and automation. Preferred: Experience with NoSQL databases (Cassandra, MongoDB). Ability to tune performance for large-scale data workflows. Exposure to cloud-based data solutions (AWS, Azure). Benefits & Culture Highlights: Dynamic work environment focused on innovation and continuous learning. Opportunities for professional development and career advancement. Collaborative team atmosphere that values diverse perspectives. Skills: SQL proficiency, big data developer, data modeling techniques, data integration technologies, Python scripting, ETL tools, GCP, performance tuning, Python, SQL, Hadoop ecosystem (HDFS, MapReduce, Hive), Apache Spark, data modeling, PySpark, data warehousing, Hadoop ecosystem
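To make the Spark-side ETL responsibilities above more tangible, here is a minimal PySpark sketch of an ingest-transform-load step. The input path, column names and aggregation are hypothetical examples, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw CSV, clean, aggregate, write Parquet.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_example").getOrCreate()

# Extract: read raw data from distributed storage (HDFS here; S3/ADLS work too).
raw = spark.read.option("header", True).csv("hdfs:///raw/orders.csv")

# Transform: enforce types, drop incomplete rows, aggregate per customer.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["customer_id", "amount"])
)
per_customer = orders.groupBy("customer_id").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("order_count"),
)

# Load: write curated output for downstream consumers.
per_customer.write.mode("overwrite").parquet("hdfs:///curated/customer_totals")

spark.stop()
```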
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune. Experience: 8-12 years. Work Mode: Hybrid. Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architecture Design. Overview: We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering services. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles and Responsibilities: Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines via the Airflow scheduler. Skills and Qualifications: Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience with the AWS/Azure stack. Desirable: ETL with batch and streaming (Kinesis). Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (including MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, Azure Databricks, architecture design, PySpark, Azure, Airflow
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About The Opportunity: A key player in the Big Data solutions space, we specialize in creating and implementing large-scale data processing frameworks. Our mission is to help clients harness the power of data analytics to drive business insights and operational efficiency. With a strong focus on leveraging cutting-edge technologies, we provide a collaborative environment conducive to professional growth and innovation. Role & Responsibilities: Design and implement scalable data processing frameworks using Hadoop and Spark. Develop ETL processes for data ingestion, transformation, and loading from diverse sources. Collaborate with data architects and analysts to optimize data models and enhance performance. Ensure data quality and integrity through rigorous testing and validation. Create and maintain documentation for data workflows, processes, and architecture. Troubleshoot and resolve data-related issues in a timely manner. Skills & Qualifications - Must-Have: Proficiency in the Hadoop ecosystem (HDFS, MapReduce, Hive). Hands-on experience with Apache Spark and its components. Strong SQL skills for querying relational databases. Experience with ETL tools and data integration technologies. Knowledge of data modeling techniques and best practices. Familiarity with Python for scripting and automation. Preferred: Experience with NoSQL databases (Cassandra, MongoDB). Ability to tune performance for large-scale data workflows. Exposure to cloud-based data solutions (AWS, Azure). Benefits & Culture Highlights: Dynamic work environment focused on innovation and continuous learning. Opportunities for professional development and career advancement. Collaborative team atmosphere that values diverse perspectives. Skills: SQL proficiency, big data developer, data modeling techniques, data integration technologies, Python scripting, ETL tools, GCP, performance tuning, Python, SQL, Hadoop ecosystem (HDFS, MapReduce, Hive), Apache Spark, data modeling, PySpark, data warehousing, Hadoop ecosystem
Posted 1 week ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune. Experience: 8-12 years. Work Mode: Hybrid. Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architecture Design. Overview: We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering services. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles and Responsibilities: Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines via the Airflow scheduler. Skills and Qualifications: Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience with the AWS/Azure stack. Desirable: ETL with batch and streaming (Kinesis). Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (including MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, Azure Databricks, architecture design, PySpark, Azure, Airflow
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description: The Prime Data Engineering & Analytics (PDEA) team is seeking to hire passionate Data Engineers to build and manage the central petabyte-scale data infrastructure supporting worldwide Prime business operations. At Amazon Prime, understanding customer data is paramount to our success in providing customers with relevant and enticing benefits such as fast free shipping, instant videos, streaming music and free Kindle books in the US and international markets. At Amazon you will be working in one of the world's largest and most complex data environments. You will be part of a team that works with the marketing, retail, finance, analytics, machine learning and technology teams to provide real-time data processing solutions that give Amazon leadership, marketers and PMs timely, flexible and structured access to customer insights. The team will be responsible for building this platform end to end using the latest AWS technologies and software development principles. As a Data Engineer, you will be responsible for leading the architecture, design and development of the data, metrics and reporting platform for Prime. You will architect and implement new and automated Business Intelligence solutions, including big data and new analytical capabilities that support our development engineers, analysts and retail business stakeholders with timely, actionable data, metrics and reports, while satisfying scalability, reliability, accuracy, performance and budget goals and driving automation and operational efficiencies. You will partner with business leaders to drive strategy and prioritize projects and feature sets. You will also write and review business cases and drive the development process from design to release. In addition, you will provide technical leadership and mentoring for a team of highly capable Data Engineers. Responsibilities: Own design and execution of end-to-end projects. Own managing WW Prime core services data infrastructure. Establish key relationships which span Amazon business units and Business Intelligence teams. Implement standardized, automated operational and quality control processes to deliver accurate and timely data and reporting to meet or exceed SLAs. Basic Qualifications: 1+ years of data engineering experience. Experience with SQL. Experience with data modeling, warehousing and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). Preferred Qualifications: Experience with big data technologies such as Hadoop, Hive, Spark and EMR. Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI or DataStage. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka. Job ID: A3011483
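Since the qualifications above name query languages such as SparkSQL and HiveQL, here is a minimal sketch of querying a DataFrame through Spark SQL; the table and column names are invented purely for illustration.

```python
# Minimal Spark SQL sketch: register a DataFrame as a view and query it in SQL.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql_example").getOrCreate()

usage = spark.createDataFrame(
    [("prime_video", 3), ("music", 5), ("shipping", 2)],
    ["benefit", "uses"],
)
usage.createOrReplaceTempView("benefit_usage")

# Plain SQL over the view, much as one would write HiveQL against a table.
top = spark.sql(
    "SELECT benefit, SUM(uses) AS total_uses "
    "FROM benefit_usage GROUP BY benefit ORDER BY total_uses DESC"
)
top.show()
```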
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Opportunity A key player in the Big Data solutions space, we specialize in creating and implementing large-scale data processing frameworks. Our mission is to help clients harness the power of data analytics to drive business insights and operational efficiency. With a strong focus on leveraging cutting-edge technologies, we provide a collaborative environment conducive to professional growth and innovation. Role & Responsibilities Design and implement scalable data processing frameworks using Hadoop and Spark. Develop ETL processes for data ingestion, transformation, and loading from diverse sources. Collaborate with data architects and analysts to optimize data models and enhance performance. Ensure data quality and integrity through rigorous testing and validation. Create and maintain documentation for data workflows, processes, and architecture. Troubleshoot and resolve data-related issues in a timely manner. Skills & Qualifications Must-Have Proficiency in the Hadoop ecosystem (HDFS, MapReduce, Hive). Hands-on experience with Apache Spark and its components. Strong SQL skills for querying relational databases. Experience with ETL tools and data integration technologies. Knowledge of data modeling techniques and best practices. Familiarity with Python for scripting and automation. Preferred Experience with NoSQL databases (Cassandra, MongoDB). Ability to tune performance for large-scale data workflows. Exposure to cloud-based data solutions (AWS, Azure). Benefits & Culture Highlights Dynamic work environment focused on innovation and continuous learning. Opportunities for professional development and career advancement. Collaborative team atmosphere that values diverse perspectives. Skills: sql proficiency,big data debveloper,data modeling techniques,data integration technologies,python scripting,etl tools,gcp,performance tuning,python,sql,hadoop ecosystem (hdfs, mapreduce, hive),apache spark,data modeling,pyspark,data warehousing,hadoop ecosystem Show more Show less
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
The Opportunity: "We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling both good engineering practices and inspiring them to grow." - Software Quality Assurance Director. What You'll Contribute: Implement product changes, undertaking detailed design, programming, unit testing and deployment as required by our SDLC process. Investigate and resolve reported software defects across supported platforms. Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering. Produce component specifications and prototypes as necessary. Provide realistic and achievable project estimates for the creation and development of solutions; this information will form part of a larger release delivery plan. Develop and test software components of varying size and complexity. Design and execute unit, link and integration test plans, and document test results; create test data and environments as necessary to support the required level of validation. Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation. Produce relevant system documentation. Participate in peer review sessions to ensure ongoing quality of deliverables; validate other team members' software changes, test plans and results. Maintain and develop industry knowledge, skills and competencies in software development. What We're Seeking: A Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Java software development experience within an industry setting. Ability to work in both Windows and UNIX/Linux operating systems. Detailed understanding of software and testing methods. Strong foundation and grasp of design models and database structures. Proficiency in Kubernetes, Docker, and Kustomize. Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development. Familiarity with Eclipse, Subversion and Maven. Ability to lead and manage others independently on major feature changes. Excellent communication skills with the ability to articulate information clearly with architects, and discuss strategy/requirements with team members and the product manager. Quality-driven work ethic with meticulous attention to detail. Ability to function effectively in a geographically diverse team. Ability to work within a hybrid Agile methodology. Understanding of the design and development approaches required to build a scalable infrastructure/platform for large amounts of data ingestion, aggregation, integration and advanced analytics. Experience of developing and deploying applications into AWS or a private cloud. Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI development.
Posted 1 week ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role and Responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Preferred Education: Master's Degree. Required Technical and Professional Expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred Technical and Professional Experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune. Experience: 8-12 years. Work Mode: Hybrid. Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architecture Design. Overview: We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering services. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles and Responsibilities: Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines via the Airflow scheduler. Skills and Qualifications: Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience with the AWS/Azure stack. Desirable: ETL with batch and streaming (Kinesis). Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (including MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, Azure Databricks, architecture design, PySpark, Azure, Airflow
Posted 1 week ago
5.0 - 8.0 years
0 - 1 Lacs
Pune, Chennai
Hybrid
Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineer. Experience: 5 to 8 years. Location: Chennai / Pune. Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA / Computer Science background - any specialization. How to Apply? Send your CV to: kowshik@sightspectrum.in. Contact Number - 9500333730. Don't miss out on this amazing opportunity to accelerate your professional career! #bigdataengineer #Hadoop #PySpark #Hive
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Data n’ Analytics – Data Strategy – Manager, Strategy and Transactions. EY’s Data n’ Analytics team is a multi-disciplinary technology team delivering client projects and solutions across data management, visualization, business analytics and automation. The assignments cover a wide range of countries and industry sectors. The opportunity: We’re looking for a Manager - Data Strategy. The main objective of the role is to develop and articulate a clear and concise data strategy aligned with the overall business strategy; communicate the data strategy effectively to stakeholders across the organization, ensuring buy-in and alignment; establish and maintain data governance policies and procedures to ensure data quality, security, and compliance; oversee data management activities, including data acquisition, integration, transformation, and storage; and develop and implement data quality frameworks and processes. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also onshore-facing. Discipline: Data Strategy. Key Skills: Strong understanding of data models (relational, dimensional), data warehousing concepts, and cloud-based data architectures (AWS, Azure, GCP). Proficiency in data analysis techniques (e.g., SQL, Python, R), statistical modeling, and data visualization tools. Familiarity with big data technologies such as Hadoop, Spark, and NoSQL databases. Client handling and communication, problem solving, systems thinking, passion for technology, adaptability, agility, analytical thinking, collaboration. Skills and Attributes for Success: 10-12 years of total experience with 8+ years in the Data Strategy and Architecture field. Solid hands-on 6+ years of professional experience with designing and architecting data warehouses/data lakes on client engagements, and helping create enhancements to a data warehouse. Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP). 5+ years’ experience in Azure database offerings (relational, NoSQL, data warehouse). 5+ years’ experience in various Azure services preferred: Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services and Databricks. Minimum of 8 years of hands-on database design, modelling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse. Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS Cubes, etc.). Strong creative instincts related to data analysis and visualization. Aggressive curiosity to learn the business methodology, data model and user personas. Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends. Experience with the software development lifecycle (SDLC) and principles of product development such as installation, upgrade and namespace management. Willingness to mentor team members. Solid analytical, technical and problem-solving skills. Excellent written and verbal communication skills. Strong project and people management skills with experience in serving global clients. To qualify for the role, you must have: Master’s degree in Computer Science, Business Administration or equivalent work experience. Fact-driven and analytically minded with excellent attention to detail. Hands-on experience with data engineering tasks such as building analytical data records, and experience manipulating and analysing large volumes of data. Relevant work experience of minimum 12 to 14 years in a Big 4 or technology/consulting setup. Help incubate new finance analytics products by executing pilot, proof-of-concept projects to establish capabilities and credibility with users and clients; this may entail working either as an independent SME or as part of a larger team. Ideally, you’ll also have: Ability to think strategically/end-to-end with a result-oriented mindset. Ability to build rapport within the firm and win the trust of clients. Willingness to travel extensively and to work on client sites/practice office locations. Strong experience in SQL Server and MS Excel, plus at least one other SQL dialect, e.g. MS Access/PostgreSQL/Oracle PL/SQL/MySQL. Strong in data structures and algorithms. Experience interfacing with databases such as Azure databases, SQL Server, Oracle, Teradata, etc. Preferred exposure to JSON, Cloud Foundry, Pivotal, MATLAB, Spark, Greenplum, Cassandra, Amazon Web Services, Microsoft Azure, Google Cloud, Informatica, AngularJS, Python, etc. What We Look For: A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY SaT practices globally with leading businesses across a range of industries. What We Offer: EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: you’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: we’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: we’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: you’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Opportunity: A key player in the Big Data solutions space, we specialize in creating and implementing large-scale data processing frameworks. Our mission is to help clients harness the power of data analytics to drive business insights and operational efficiency. With a strong focus on leveraging cutting-edge technologies, we provide a collaborative environment conducive to professional growth and innovation. Role & Responsibilities: Design and implement scalable data processing frameworks using Hadoop and Spark. Develop ETL processes for data ingestion, transformation, and loading from diverse sources. Collaborate with data architects and analysts to optimize data models and enhance performance. Ensure data quality and integrity through rigorous testing and validation. Create and maintain documentation for data workflows, processes, and architecture. Troubleshoot and resolve data-related issues in a timely manner. Skills & Qualifications - Must-Have: Proficiency in the Hadoop ecosystem (HDFS, MapReduce, Hive). Hands-on experience with Apache Spark and its components. Strong SQL skills for querying relational databases. Experience with ETL tools and data integration technologies. Knowledge of data modeling techniques and best practices. Familiarity with Python for scripting and automation. Preferred: Experience with NoSQL databases (Cassandra, MongoDB). Ability to tune performance for large-scale data workflows. Exposure to cloud-based data solutions (AWS, Azure). Benefits & Culture Highlights: Dynamic work environment focused on innovation and continuous learning. Opportunities for professional development and career advancement. Collaborative team atmosphere that values diverse perspectives. Skills: SQL proficiency, big data developer, data modeling techniques, data integration technologies, Python scripting, ETL tools, GCP, performance tuning, Python, SQL, Hadoop ecosystem (HDFS, MapReduce, Hive), Apache Spark, data modeling, PySpark, data warehousing, Hadoop ecosystem
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Sapiens is on the lookout for a Developer (ETL) to become a key player in our Bangalore team. If you're a seasoned ETL pro and are ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit. Location: Bangalore. Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity. This position will be part of Sapiens' L&P division; for more information about it, click here: https://sapiens.com/solutions/life-and-pension-software/ What You'll Do: Design and develop core components/services that are flexible, extensible, multi-tier, scalable, high-performance and reliable parts of an advanced, complex software system called ALIS, both in R&D and Delivery. Good understanding of advanced ETL concepts and administration activities to support R&D/projects. Experience with different ETL tools (minimum 4) and advanced transformations; strong in Talend and SAP BODS to support R&D/projects. Able to resolve all ETL code and administration issues. Ability to resolve complex reporting challenges. Ability to create full-fledged dashboards with storyboards/storylines, drill-down, linking, etc.; design tables, views or data marts to support these dashboards. Ability to understand and propose data load strategies that improve performance and visualizations. Ability to performance-tune SQL, ETL, reports and universes. Understand the Sapiens Intelligence product and support the points below: Understands the Transaction Layer model for all modules. Understands the Universe Model for all modules. Should have end-to-end Sapiens Intelligence knowledge. Should be able to independently demo or give training on the Sapiens Intelligence product. Should be an SME in Sapiens Intelligence as a product. What to Have for This Position - Must-Have Skills: 3-5 years of IT experience. Should understand advanced insurance concepts and have good command over all business/functional areas (such as NB, Claims, Finance, etc.). Should have experience with developing a complete DWH ETL lifecycle. Should have experience in developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. - using ETL tools such as Talend, BODS or SSIS. Experience or knowledge in Big Data tools (Spark, Hive, Kafka, Hadoop, Hortonworks, Python, R) would be an advantage. Should have experience in developing with SAP BO or knowledge of another reporting tool. Should be able to implement reusability, parameterization, workflow design, etc. Should have experience interacting with customers, understanding business requirement documents and translating them into ETL specifications and low/high-level design documents. Experience in understanding complex source system data structures, preferably in insurance services (insurance preferred). Experience in data analysis, data modeling and data mart design. Strong database development skills: complex SQL queries, complex stored procedures. Good verbal and written communication in English; strong interpersonal, analytical and problem-solving abilities. Ability to work with minimal guidance or supervision in a time-critical environment. Willingness to travel and work at various customer sites across the globe. About Sapiens: Sapiens is a global leader in the insurance industry, delivering its award-winning, cloud-based SaaS insurance platform to over 600 customers in more than 30 countries. Sapiens' platform offers pre-integrated, low-code capabilities to accelerate customers' digital transformation. With more than 40 years of industry expertise, Sapiens has a highly professional team of over 5,000 employees globally. For more information, visit us at www.sapiens.com. Sapiens is an equal opportunity employer. We value diversity and strive to create an inclusive work environment that embraces individuals from diverse backgrounds. Disclaimer: Sapiens India does not authorise any third parties to release employment offers or conduct recruitment drives on its behalf. Hence, beware of inauthentic and fraudulent job offers or recruitment drives from any individuals or websites purporting to represent Sapiens. Further, Sapiens does not charge any fee or other emoluments for any reason (including, without limitation, visa fees) or seek compensation from educational institutions to participate in recruitment events. Accordingly, please check the authenticity of any such offers before acting on them; where acted upon, you do so at your own risk. Sapiens shall neither be responsible for honouring or making good the promises made by fraudulent third parties, nor for any monetary or any other loss incurred by the aggrieved individual or educational institution. In the event that you come across any fraudulent activities in the name of Sapiens, please report the incident to sharedservices@sapiens.com.
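The must-have skills above mention ETL control tables, error logging and auditing; the following Python sketch shows one common shape of that pattern. The SQLite backend, table schema and job names are hypothetical stand-ins, not Sapiens' actual implementation.

```python
# Minimal sketch of an ETL audit/control-table pattern.
# The sqlite backend, schema and job names are hypothetical.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("etl_control.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS etl_audit (
           batch_id    INTEGER PRIMARY KEY AUTOINCREMENT,
           job_name    TEXT NOT NULL,
           started_at  TEXT NOT NULL,
           finished_at TEXT,
           status      TEXT,
           rows_loaded INTEGER,
           error_msg   TEXT
       )"""
)

def run_with_audit(job_name, job_fn):
    """Run an ETL step and record its outcome in the control table."""
    started = datetime.now(timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO etl_audit (job_name, started_at) VALUES (?, ?)",
        (job_name, started),
    )
    batch_id = cur.lastrowid
    try:
        rows = job_fn()  # the job reports how many rows it loaded
        conn.execute(
            "UPDATE etl_audit SET finished_at=?, status='SUCCESS', rows_loaded=? WHERE batch_id=?",
            (datetime.now(timezone.utc).isoformat(), rows, batch_id),
        )
    except Exception as exc:
        conn.execute(
            "UPDATE etl_audit SET finished_at=?, status='FAILED', error_msg=? WHERE batch_id=?",
            (datetime.now(timezone.utc).isoformat(), str(exc), batch_id),
        )
    conn.commit()

run_with_audit("load_policies", lambda: 1250)  # toy job that "loads" 1,250 rows
```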
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Lowe’s Lowe’s is a FORTUNE® 100 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2024 sales of more than $83 billion, Lowe’s operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing, improving community spaces, helping to develop the next generation of skilled trade experts and providing disaster relief to communities in need. For more information, visit Lowes.com. Lowe’s India, the Global Capability Center of Lowe’s Companies Inc., is a hub for driving our technology, business, analytics, and shared services strategy. Based in Bengaluru with over 4,500 associates, it powers innovations across omnichannel retail, AI/ML, enterprise architecture, supply chain, and customer experience. From supporting and launching homegrown solutions to fostering innovation through its Catalyze platform, Lowe’s India plays a pivotal role in transforming home improvement retail while upholding strong commitment to social impact and sustainability. For more information, visit Lowes India About The Team A Merchandising Analyst plays a pivotal role in driving the performance of product assortments by leveraging data to optimize strategies. They are responsible for analyzing key trends across sales, margins, inventory, turnover, and other critical KPIs. By incorporating macroeconomic factors and leveraging forecasted expectations, they develop effective strategies to maximize revenue and margins, optimize inventory levels, and ensure customer needs are met efficiently. The ideal candidate should possess strong technical expertise, enabling them to conduct root cause analyses, A/B testing, hypothesis testing, and regression analysis. Their insights should translate into actionable recommendations that drive business results. Additionally, they are expected to collaborate with cross-functional teams to integrate metrics beyond merchandising and engage stakeholders to understand and address their specific requirements effectively. Job Summary The primary purpose of this role is to perform mathematical and statistical analysis or model building as appropriate. This includes following analytical best practices, analyzing and reporting accurate results, and identifying meaningful insights that directly support decision making. This role provides assistance in supporting one functional area of the business in partnership with other team members. At times, this role may work directly with the business function, but the majority of time is spent working with internal team members to identify and understand business needs. Roles & Responsibilities Core Responsibilities: Conduct in-depth analysis of business trends, financial performance, and market conditions. Develop and maintain data models, dashboards, and reports to support business decisions. Identify opportunities for operational improvements and recommend strategic solutions. Collaborate with cross-functional teams to translate data insights into actionable strategies. Ensure data accuracy, integrity, and security while handling large datasets. 
Present findings and recommendations to leadership in a clear and concise manner. Years Of Experience 1 to 3 years of experience in data analytics. Education Qualification & Certifications (optional) Required Minimum Qualifications Bachelor's degree in business administration, computer science, computer information systems (CIS), engineering, or a related field (or equivalent work experience in lieu of a degree). Skill Set Required Experience using basic analytical tools such as R, Python, SQL, SAS, Adobe, Alteryx, Knime, Aster. Experience using visualization tools such as Power BI, Tableau. Secondary Skills (desired) Experience with business intelligence and reporting tools (e.g., MicroStrategy, Business Objects, Cognos, Adobe, TM1, Alteryx, Knime, SSIS, SQL Server) and enterprise-level databases (Hadoop, GCP, Azure, Oracle, Teradata, DB2). Experience working with big, unstructured data in a retail environment. Experience with analytical tools like Python, Alteryx, Knime, SAS, R, etc. Experience with visualization tools like MicroStrategy VI, Power BI, SAS-VA, Tableau, D3, R-Shiny. Programming experience using tools such as R, Python. Data Science experience using tools such as ML, text mining. Knowledge of SQL. Project management experience. Experience in home improvement retail. Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.
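As a toy illustration of the A/B and hypothesis testing this role describes, the sketch below compares conversion between two assortment variants with a two-proportion z-test. All numbers are invented for the example.

```python
# Compare conversion rates between a control and a test assortment.
# statsmodels' proportions_ztest returns the z statistic and p-value.
from statsmodels.stats.proportion import proportions_ztest

conversions = [532, 601]       # converted transactions: control, test
samples = [10000, 10000]       # transactions observed per variant

stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```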
Posted 1 week ago
2.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Company Overview: Bush & Bush Law Group is a reputable law firm committed to providing our clients with outstanding legal services through the power of advanced technology and data management. We are seeking an experienced ETL Developer (Python) to join our dynamic data team. As an ETL Developer, you will play a vital role in shaping how we utilize data, ensuring accurate and timely downstream decision-making through robust data extraction, transformation, and loading processes. Key Responsibilities: Create and maintain scalable ETL processes using Python to extract data from various operational systems Transform raw data into meaningful formats that meet the business requirements, applying quality checks throughout the process Load data into databases and data warehouses, optimizing the performance of queries and storage solutions Collaborate with cross-functional teams to interpret data needs and implement solutions that drive operational excellence Execute performance tuning, error resolution, and troubleshooting of ETL pipelines to ensure efficient operation Document all ETL processes and data workflows, providing clear communication to stakeholders Stay current on industry best practices and emerging trends in data integration and management Compensation: $5-$7 an hour paid bi-weekly. Requirements Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related area At least 2 years of experience in ETL development, specifically using Python Strong familiarity with relational databases and SQL, including experience with data warehousing Experience with ETL tools and frameworks; knowledge of cloud-based solutions is a plus Excellent analytical and problem-solving skills with great attention to detail Ability to work collaboratively in a team environment with effective communication skills Familiarity with big data technologies (like Hadoop or Spark) is an advantage Benefits Positive Culture: Be part of a supportive and innovative team Professional Growth: Opportunities for career advancement and ongoing professional development Work-Life Balance: Flexible work arrangements to support a healthy balance between personal and professional life
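For illustration, a minimal sketch of the extract-transform-load flow with quality checks that this posting describes. The file name, columns, and SQLite target are hypothetical, not the firm's actual systems.

```python
# Extract rows from a CSV export, drop records that fail quality checks
# (missing id, unparseable amount), and load the clean rows into SQLite.
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    out = []
    for r in rows:
        if not r.get("matter_id"):          # quality check: id required
            continue
        try:
            amount = float(r["billed_amount"])  # quality check: numeric amount
        except (KeyError, ValueError):
            continue
        out.append((r["matter_id"], r.get("client", "").strip().title(), amount))
    return out

def load(rows, db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS billing
                    (matter_id TEXT, client TEXT, amount REAL)""")
    conn.executemany("INSERT INTO billing VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()
    return len(rows)

if __name__ == "__main__":
    loaded = load(transform(extract("billing_export.csv")))
    print(f"Loaded {loaded} clean rows")
```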
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
Company Overview: Bush & Bush Law Group is a reputable law firm committed to providing our clients with outstanding legal services through the power of advanced technology and data management. We are seeking an experienced ETL Developer (Python) to join our dynamic data team. As an ETL Developer, you will play a vital role in shaping how we utilize data, ensuring accurate and timely downstream decision-making through robust data extraction, transformation, and loading processes. Key Responsibilities: Create and maintain scalable ETL processes using Python to extract data from various operational systems Transform raw data into meaningful formats that meet the business requirements, applying quality checks throughout the process Load data into databases and data warehouses, optimizing the performance of queries and storage solutions Collaborate with cross-functional teams to interpret data needs and implement solutions that drive operational excellence Execute performance tuning, error resolution, and troubleshooting of ETL pipelines to ensure efficient operation Document all ETL processes and data workflows, providing clear communication to stakeholders Stay current on industry best practices and emerging trends in data integration and management Compensation: $5-$7 an hour paid bi-weekly. Requirements Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related area At least 2 years of experience in ETL development, specifically using Python Strong familiarity with relational databases and SQL, including experience with data warehousing Experience with ETL tools and frameworks; knowledge of cloud-based solutions is a plus Excellent analytical and problem-solving skills with great attention to detail Ability to work collaboratively in a team environment with effective communication skills Familiarity with big data technologies (like Hadoop or Spark) is an advantage Benefits Positive Culture: Be part of a supportive and innovative team Professional Growth: Opportunities for career advancement and ongoing professional development Work-Life Balance: Flexible work arrangements to support a healthy balance between personal and professional life
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Roles and Responsibilities Design, develop, test, deploy, and maintain large-scale data processing pipelines using Scala/Spark. Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions. Troubleshoot complex issues related to Hive queries, Spark jobs, and other big data technologies. Ensure scalability, performance, and reliability of big data systems on Cloudera/Hadoop ecosystem. Stay up-to-date with industry trends and best practices in big data development.
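The posting asks for Scala/Spark; purely for illustration, the sketch below shows the same pipeline shape (read, transform, aggregate, write) in PySpark. The paths, columns, and app name are hypothetical.

```python
# Read raw order data, keep completed orders, aggregate daily revenue by
# region, and write a partitioned data mart back to HDFS.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

orders = spark.read.parquet("hdfs:///data/raw/orders")

daily = (orders
         .filter(F.col("status") == "COMPLETE")
         .withColumn("order_date", F.to_date("created_at"))
         .groupBy("order_date", "region")
         .agg(F.count("*").alias("orders"),
              F.sum("amount").alias("revenue")))

(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("hdfs:///data/marts/daily_orders"))
```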
Posted 1 week ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Are you ready to write your next chapter? Make your mark at one of the biggest names in payments. With proven technology, we process the largest volume of payments in the world, driving the global economy every day. When you join Worldpay, you join a global community of experts and changemakers, working to reinvent an industry by constantly evolving how we work and making the way millions of people pay easier, every day. About The Role Worldpay is on an exciting journey, re-engineering our core merchant payments platform to be more cost-effective, scalable and cloud-ready, utilizing the latest cutting-edge technologies. This journey will require the very best engineering talent to get us there, as it’s not just a technical change; it’s a cultural change as well. About The Team The New Acquiring Platform (NAP) is pioneering a payment revolution. Our state-of-the-art system equips us for the spending habits of tomorrow, as well as today. We will be able to deliver unrivalled insights into every trade and transaction. The platform is designed around payments, not just cards, so we can cater to every emerging trend with speed and customer centricity. What You'll Own We are looking for bright talent who can build future testing capability for ongoing BAU delivery and drive quality improvements across multiple agile teams. You will be working on the QA team, which caters to the product, platform and business needs of a number of Agile Release Trains, including Acquiring, Billing & Funding, Servicing, Reporting and E2E Volume Functional Testing. The Quality Assurance team is fundamental to unlocking value for our Merchant business, being relied upon both to ensure the stability of our production releases and to deliver new boundary-breaking products for our customers. This includes full E2E testing of the Merchant Acquirer lifecycle, from merchant onboarding to invoice generation and reporting, whilst interacting with many payment modules/interfaces as part of delivery. Where You'll Own It You will own it in our vibrant office locations in our Bangalore and Indore hubs. APAC: With hubs in the heart of city centers and tech capitals, things move fast in APAC. We pride ourselves on being an agile and dynamic collective, collaborating with different teams and offices across the globe. What You Bring Proven track record of E2E systems integration testing across multiple team and application boundaries, including third parties and vendors. Experience of operating within an Agile team (SAFe methodology preferable). Testing of APIs using SoapUI, Postman, etc. Advanced SQL and PL/SQL skills (procedures, packages), basic performance tuning and DBA metadata. ETL or data warehouse concepts, with experience in any ETL tools (Informatica, ODI, etc.). Experience with automated testing development (Shell, Python, Java). Experience with testing frameworks such as Selenium and tools such as Cucumber. Good understanding of CI/CD principles, error logging and reporting, including the supporting tools, e.g. GitHub, Jenkins, Splunk. Experience in testing against modern cloud platforms and containerised applications (AWS/Azure). Understanding of Kafka / Hadoop (Spark) and/or event-driven design and principles. Understanding of job scheduler tools (Control-M, Autosys, etc.). Experience of the payments industry is preferable, along with working with large volumes of data across real-time processing systems (35+ million data records). Good understanding of Unix/Linux/Windows operating systems and Oracle databases.
Working with complex reporting requirements across multiple systems. Experience in supporting a small team of experienced Quality Analysts. Experience in carrying out internal reviews to ensure quality standards are met. Must demonstrate the ability to own tasks and defects and see them through to completion. Building strong relationships across multiple engineering teams and stakeholders. Experience in reviewing progress and presenting results to stakeholders. Experience with environment management, deployments, and prioritisation. Provide subject matter expert knowledge in Quality Assurance best practices, tools, and software. Experience working with Rally for test case management and defect management. What Makes a Worldpayer It’s simple: Think, Act, Win. We stay curious, always asking the right questions to be better every day, finding creative solutions to simplify the complex. We’re dynamic; every Worldpayer is empowered to make the right decisions for their customers. And we’re determined, always staying open, winning and failing as one. Does this sound like you? Then you sound like a Worldpayer. Apply now to write the next chapter in your career. Privacy Statement Worldpay is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how Worldpay protects personal information online, please see the Online Privacy Notice. Sourcing Model Recruitment at Worldpay works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. Worldpay does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass
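As a small illustration of the API testing this role lists (typically done with Postman, SoapUI, or a code framework), here is a hedged pytest-style sketch. The base URL, endpoint, and payload are placeholders, not a real Worldpay API.

```python
# Two example API checks runnable under pytest: a creation call should
# return 201 and echo the payload; an unknown resource should return 404.
import requests

BASE_URL = "https://api.example.test"  # placeholder endpoint for illustration

def test_create_merchant_returns_201():
    payload = {"name": "Test Merchant Ltd", "country": "GB"}
    resp = requests.post(f"{BASE_URL}/merchants", json=payload, timeout=10)
    assert resp.status_code == 201
    body = resp.json()
    assert body["name"] == payload["name"]
    assert "merchantId" in body            # hypothetical response field

def test_unknown_merchant_returns_404():
    resp = requests.get(f"{BASE_URL}/merchants/does-not-exist", timeout=10)
    assert resp.status_code == 404
```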
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM). Here is a glimpse of the problems this team deals with on a regular basis: Using live package and truck signals to adjust truck capacities in real time. HOTW models for Last Mile Channel Allocation. Using LLMs to automate analytical processes and insight generation. Ops research to optimize middle-mile truck routes. Working with global partner science teams on Reinforcement Learning-based pricing models and estimating Shipments Per Route for $MM savings. Deep Learning models to synthesize attributes of addresses. Abuse detection models to reduce network losses. Key job responsibilities Use machine learning and analytical techniques to create scalable solutions for business problems. Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models. Work closely with other science and engineering teams to drive real-time model implementations. Work closely with Ops/Product partners to identify problems and propose machine learning solutions. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production. Lead projects and mentor other scientists and engineers in the use of ML techniques. Basic Qualifications 5+ years of data scientist experience. Experience with data scripting languages (e.g. SQL, Python, R) or statistical/mathematical software (e.g. R, SAS, or Matlab). Experience with statistical models, e.g.
multinomial logistic regression. Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive). Experience working collaboratively with data engineers and business intelligence engineers. Demonstrated expertise in a wide range of ML techniques. Preferred Qualifications Experience as a leader and mentor on a data science team. Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science. Expertise in Reinforcement Learning and Gen AI is preferred. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Development Centre (India) Private Limited - S55 Job ID: A2974724
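As a toy example of the multinomial logistic regression named in the qualifications, the sketch below fits a three-class model on scikit-learn's built-in iris data; it illustrates the technique only, not any Amazon system.

```python
# Fit a multinomial logistic regression on a 3-class dataset and report
# held-out accuracy. With more than two classes, scikit-learn's
# LogisticRegression fits a multinomial (softmax) model with its default solver.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_tr, y_tr)

print(f"Test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```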
Posted 1 week ago