4.0 - 10.0 years
15 - 20 Lacs
Bengaluru, Karnataka, India
On-site
Greetings from Maneva!
Job Description
Job Title: PySpark/Scala Developer
Location: Bangalore
Experience: 4-10 years
Notice period: Immediate to 30 days
Requirements: Excellent knowledge of Spark, with a thorough understanding of the Spark framework, performance tuning, etc. Excellent knowledge of, and at least 4 years of hands-on experience in, Scala and PySpark. Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory. Strong Unix and shell scripting skills. Excellent interpersonal skills and, for experienced candidates, excellent leadership skills. Good knowledge of any of the CSPs (Azure, AWS, or GCP) is mandatory; certifications on Azure will be an additional plus. If you are excited to grab this opportunity, please apply directly or share your CV at [HIDDEN TEXT] and [HIDDEN TEXT].
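The posting's emphasis on the Spark framework, performance tuning, and Hive could look roughly like the minimal PySpark sketch below; the database/table name and configuration values are assumptions for illustration, not part of the job description.

```python
from pyspark.sql import SparkSession

# Minimal sketch: a Hive-enabled Spark session with two common tuning settings.
# The table name "sales_db.orders" and the partition count are hypothetical.
spark = (
    SparkSession.builder
    .appName("spark-tuning-sketch")
    .config("spark.sql.shuffle.partitions", "200")   # size shuffle partitions to the data volume
    .config("spark.sql.adaptive.enabled", "true")    # let AQE coalesce small shuffle partitions
    .enableHiveSupport()
    .getOrCreate()
)

# Read a Hive table and compute a simple aggregate.
orders = spark.table("sales_db.orders")
orders.groupBy("order_date").count().show(10)
```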
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Minimum Qualifications
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience: 8+ years of experience with data analytics, data modeling, and database design. 3+ years of coding and scripting (Python, Java, Scala) and design experience. 3+ years of experience with the Spark framework. 5+ years of experience with ELT methodologies and tools. 5+ years of mastery in designing, developing, tuning, and troubleshooting SQL. Knowledge of Informatica PowerCenter and Informatica IDMC. Knowledge of distributed, column-oriented technologies used to build high-performance databases such as Vertica and Snowflake. Strong data analysis skills for extracting insights from financial data. Proficiency in reporting tools (e.g., Power BI, Tableau).
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a PySpark Developer at Viraaj HR Solutions, you will be responsible for developing and maintaining scalable PySpark applications for data processing. Your role will involve collaborating with data engineers to design and implement ETL pipelines for large datasets. Additionally, you will perform data analysis and build data models using PySpark to derive insights. It will be your responsibility to ensure data quality and integrity by implementing data cleansing routines and leveraging SQL to query databases effectively. You will also create comprehensive data reports and visualizations for stakeholders, optimize existing data processing jobs for performance and efficiency, and implement new features and enhancements as required by project specifications. Participation in code reviews to ensure adherence to best practices, troubleshooting technical issues with team members, and maintaining documentation of data processes and system configurations will be part of your daily tasks.
To excel in this role, you should possess a Bachelor's degree in Computer Science, Information Technology, or a related field, along with proven experience as a PySpark Developer or in a similar role. Strong programming skills in PySpark and Python, a solid understanding of the Spark framework and its APIs, and proficiency in SQL for managing and querying databases are essential qualifications. Experience with ETL tools and processes, knowledge of data visualization techniques and tools, and familiarity with cloud platforms such as AWS and Azure are also required. Your problem-solving and analytical skills, along with excellent communication skills (both verbal and written), will be crucial for success in this role. You should be able to work effectively in a team environment, adapt to new technologies and methodologies, and have experience in Agile and Scrum methodologies. Prior experience in data processing on large datasets and an understanding of data governance and compliance standards will be beneficial.
Key Skills: Agile methodologies, data analysis, team collaboration, Python, Scrum, PySpark, data visualization, problem-solving, ETL tools, Python scripting, Apache Spark, Spark framework, cloud platforms (AWS, Azure), SQL, cloud technologies, data processing.
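As a rough illustration of the responsibilities described above (ETL pipelines, data cleansing routines, SQL-style querying), a minimal PySpark pipeline might look like the sketch below; the file paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-cleansing-sketch").getOrCreate()

# Ingest: read raw CSV with a header (path is a placeholder).
raw = spark.read.option("header", "true").csv("/data/raw/customers.csv")

# Cleanse: trim string columns, drop exact duplicates, and remove rows missing a key column.
clean = (
    raw.select([F.trim(F.col(c)).alias(c) for c in raw.columns])
       .dropDuplicates()
       .na.drop(subset=["customer_id"])
)

# Query: register a temp view so the cleansed data can be queried with plain SQL.
clean.createOrReplaceTempView("customers")
summary = spark.sql("SELECT country, COUNT(*) AS n FROM customers GROUP BY country")

# Load: write the result as Parquet for downstream reporting.
summary.write.mode("overwrite").parquet("/data/curated/customer_counts")
```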
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Delhi
On-site
The client, a leading MNC, specializes in technology consulting and digital solutions for global enterprises. With a vast workforce of over 145,000 professionals across 90+ countries, they cater to 1100+ clients in various industries. The company offers a comprehensive range of services including consulting, IT solutions, enterprise applications, business processes, engineering, network services, customer experience, AI & analytics, and cloud infrastructure services. Notably, they have been recognized for their commitment to sustainability with the Terra Carta Seal, showcasing their dedication to building a climate and nature-positive future.
As a Data Engineer with a minimum of 6 years of experience, you will be responsible for constructing and managing data pipelines. The ideal candidate should possess expertise in Databricks, AWS/Azure, and data storage technologies such as databases and distributed file systems. Familiarity with the Spark framework is essential, and prior experience in the retail sector would be advantageous.
Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for processing large data volumes from diverse sources.
- Implement and oversee data integration solutions utilizing tools like Databricks, Snowflake, and other relevant technologies.
- Develop and optimize data models and schemas to support analytical and reporting requirements.
- Write efficient and sustainable Python code for data processing and transformations.
- Utilize Apache Spark for distributed data processing and large-scale analytics.
- Translate business needs into technical solutions.
- Ensure data quality and integrity through rigorous unit testing.
- Collaborate with cross-functional teams to integrate data pipelines with other systems.
Technical Requirements:
- Proficiency in Databricks for data integration and processing.
- Experience with ETL tools and processes.
- Strong Python programming skills with Apache Spark, emphasizing data processing and automation.
- Solid SQL skills and familiarity with relational databases.
- Understanding of data warehousing concepts and best practices.
- Exposure to cloud platforms such as AWS and Azure.
- Hands-on troubleshooting ability and problem-solving skills for complex data issues.
- Practical experience with Snowflake.
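A minimal sketch of the kind of Databricks-oriented pipeline described above might look as follows; it assumes a Databricks runtime (or a local session with the delta-spark package configured), and the mount paths and column names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch only: assumes Delta Lake support is available in the Spark session.
spark = SparkSession.builder.appName("delta-pipeline-sketch").getOrCreate()

# Ingest raw JSON events (path is a placeholder).
events = spark.read.json("/mnt/raw/events/")

# Light transformation: derive a date column to partition by.
curated = events.withColumn("event_date", F.to_date("event_ts"))

# Write a partitioned Delta table for downstream analytics and reporting.
(curated.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("/mnt/curated/events"))
```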
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
You should have 6-8 years of experience with a deep understanding of the Spark framework, along with hands-on experience in Spark SQL and PySpark. Your expertise should include Python programming and familiarity with common Python libraries. Strong analytical skills are essential, especially in database management, including writing complex queries, query optimization, debugging, user-defined functions, views, and indexes. Your problem-solving abilities will be crucial in designing, implementing, and maintaining efficient data models and pipelines. Experience with big data technologies is a must, while familiarity with any ETL tool would be advantageous.
As part of your responsibilities, you will work on projects to deliver, review, and design PySpark and Spark SQL-based data engineering analytics solutions. Your tasks will involve writing clean, efficient, reusable, testable, and scalable Python logic for analytical solutions. Emphasis will be on building solutions for data cleaning, data scraping, and exploratory data analysis, ensuring compatibility with any BI tool. Collaboration with Data Analysts/BI developers to provide clean and processed data will be essential. You will design data processing pipelines using ETL techniques, develop and deliver complex requirements to achieve business goals, and work with unstructured, structured, and semi-structured data and their respective databases. Effective coordination with internal engineering and development teams to understand requirements and develop solutions is critical. Communication with stakeholders to grasp business logic and provide optimal data engineering solutions will also be part of your role. It is important to adhere to best coding practices and standards throughout your work.
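To make the Spark SQL and user-defined function expectations concrete, here is a minimal, self-contained sketch; the data, names, and cleansing logic are illustrative assumptions only (in practice, built-in functions are preferred over Python UDFs where possible).

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("spark-sql-udf-sketch").getOrCreate()

# Hypothetical data created inline so the sketch runs on its own.
df = spark.createDataFrame(
    [("ORD-001", "  Mumbai "), ("ORD-002", None)],
    ["order_id", "city"],
)
df.createOrReplaceTempView("orders")

# A simple cleansing function exposed to Spark SQL as a UDF.
def normalize_city(value):
    return value.strip().title() if value else "UNKNOWN"

spark.udf.register("normalize_city", normalize_city, StringType())

spark.sql("""
    SELECT order_id, normalize_city(city) AS city
    FROM orders
""").show()
```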
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an experienced professional with 3-5 years in the field, you will be responsible for handling various technical tasks related to Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your expertise in Azure Data Factory will be crucial in this role. Your primary responsibilities will include demonstrating advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. Your ability to analyze and comprehend complex data sets will play a key role in your daily tasks. Proficiency in Azure Data Lake and other Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD will be essential for success in this role. Additionally, a solid understanding of master data management, data warehousing, and business intelligence architecture will be required. You will be expected to have experience in data modeling and database design, with a strong grasp of SQL Server best practices. Effective communication skills, both verbal and written, will be necessary for interacting with stakeholders at all levels. A clear understanding of the data warehouse lifecycle will be beneficial, as you will be involved in preparing design documents, unit test plans, and code review reports. Experience working in an Agile environment, particularly with methodologies like Scrum, Lean, or Kanban, will be advantageous. Knowledge of big data technologies such as the Spark framework, NoSQL, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) would be a valuable asset in this role.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
NTT DATA is seeking a Java Technical Consultant to join their team in Bangalore, Karnataka (IN-KA), India. As a Java Technical Consultant, you will be responsible for demonstrating proficiency in Java, including a solid understanding of its ecosystems. You will also be expected to have sound knowledge of Object-Oriented Programming (OOP) patterns and concepts, familiarity with different design and architectural patterns, and the ability to write reusable Java libraries. Additionally, you should possess expertise in Java concurrency patterns and a basic understanding of the MVC (Model-View-Controller) pattern, JDBC (Java Database Connectivity), and RESTful web services. Experience in working with popular web application frameworks like Play and Spark is preferred, as well as relevant knowledge of Java GUI frameworks like Swing, SWT, and AWT according to project requirements. The ideal candidate will have the ability to write clean, readable Java code, basic know-how of the class-loading mechanism in Java, experience in handling external and embedded databases, and an understanding of basic design principles behind scalable applications. You should also be skilled at creating database schemas that characterize and support business processes, knowledgeable about the JVM (Java Virtual Machine) and its drawbacks, weaknesses, and workarounds, and proficient in implementing automated testing platforms and unit tests. Moreover, you are expected to have in-depth knowledge of code versioning tools like Git, an understanding of build tools such as Ant, Maven, and Gradle, expertise in continuous integration, and familiarity with JavaServer Pages (JSP) and servlets, web frameworks like Struts and Spring, service-oriented architecture, web technologies like HTML, JavaScript, CSS, and jQuery, and markup languages such as XML and JSON. Other required skills for this role include knowledge of abstract classes and interfaces, constructors, lists, maps, sets, file I/O and serialization, exceptions, generics, Java keywords like static, volatile, synchronized, and transient, multithreading, and synchronization. Banking experience is a must for this position.
NTT DATA is a global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is committed to helping clients innovate, optimize, and transform for long-term success and is one of the leading providers of digital and AI infrastructure worldwide. Visit us at us.nttdata.com.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an experienced professional with 3-5 years of experience, you will be responsible for working with a range of technical skills including Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your primary focus will be on Azure Data Factory, where you will utilize your expertise to handle complex data analysis tasks effectively. In this role, you will demonstrate advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. It is essential that you possess a solid understanding of Azure Data Lake and Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD processes. Furthermore, your responsibilities will include mastering data management, data warehousing, and business intelligence architecture. You will be required to apply your experience in data modeling and database design, ensuring compliance with SQL Server best practices. Effective communication is key in this role, as you will engage with stakeholders at various levels. You will contribute to the preparation of design documents, unit test plans, and code review reports. Experience in an Agile environment, specifically with Scrum, Lean, or Kanban methodologies, will be advantageous. Additionally, familiarity with big data technologies such as the Spark framework, NoSQL databases, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) will be beneficial for this position.
Posted 1 month ago
6.0 - 7.0 years
6 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your role and responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities: Experience building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experience developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Required technical and professional expertise: A total of 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills. A minimum of 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala. A minimum of 3 years of experience on cloud data platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB. Good to excellent SQL skills.
Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark Certified developers.
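As a minimal illustration of the streaming-pipeline experience mentioned above, a Structured Streaming job reading from Kafka might look like the sketch below; the broker address, topic, and paths are assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a stream of messages from a (hypothetical) Kafka topic.
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast the value column to string before parsing downstream.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Write micro-batches to Parquet, with a checkpoint location for recovery.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/streams/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```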
Posted 2 months ago
5.0 - 12.0 years
5 - 6 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Description
We are seeking an experienced AWS Glue Engineer to join our team in India. The ideal candidate will have a strong background in ETL processes and AWS services, with the ability to design and implement efficient data pipelines.
Responsibilities
Design, develop, and maintain ETL processes using AWS Glue. Collaborate with data architects and data scientists to optimize data pipelines. Implement data transformation processes to ensure data integrity and accessibility. Monitor and troubleshoot ETL jobs to ensure performance and reliability. Work with AWS services such as S3, Redshift, and RDS to support data workflows.
Skills and Qualifications
5-12 years of experience in data engineering or ETL development. Strong proficiency in AWS Glue and AWS ecosystem services. Experience with Python or Scala for scripting and data transformation. Knowledge of data modeling and database design principles. Familiarity with data warehousing concepts and tools. Understanding of data governance and security best practices. Experience with version control systems like Git.
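A minimal AWS Glue PySpark job of the kind described above might be sketched as follows; the catalog database, table, and S3 bucket are placeholders, and this is an illustrative outline rather than a drop-in script.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue setup: resolve job arguments and build the Glue/Spark contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a (hypothetical) Glue Data Catalog table.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Drop records missing the key column, using Spark DataFrame transforms.
orders = source.toDF().dropna(subset=["order_id"])

# Write curated output back to S3 as Parquet (bucket name is a placeholder).
orders.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

job.commit()
```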
Posted 2 months ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your role and responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Experience building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Experience developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: A total of 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills. A minimum of 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala. A minimum of 3 years of experience on cloud data platforms on AWS. Exposure to streaming solutions and message brokers like Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB. Good to excellent SQL skills.
Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark Certified developers. AWS S3, Redshift, and EMR for data storage and distributed processing. AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
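As a small, hypothetical illustration of the AWS-side data engineering above, a Spark batch job running on EMR might land curated data in S3 like this; the bucket names and columns are assumptions, and S3 access is assumed to come from the cluster configuration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("emr-s3-sketch").getOrCreate()

# Read raw JSON landed in S3 by an upstream process (path is a placeholder).
raw = spark.read.json("s3://example-raw-bucket/clickstream/")

# Derive a partition column and aggregate to a daily reporting grain.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "page")
       .count()
)

# Write partitioned Parquet that downstream query engines can read in place.
(daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/clickstream_daily/"))
```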
Posted 2 months ago