10 - 12 years
13 - 20 Lacs
Chennai
Work from Office
Key Responsibilities:
- Understand the factories, the manufacturing process, data availability, and avenues for improvement
- Brainstorm, together with engineering, manufacturing, and quality, problems that can be solved using the data acquired in the data lake platform
- Define what data is required to create a solution, and work with connectivity engineers and users to collect the data
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional/non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability
- Work on data preparation and data deep dives; help engineering, process, and quality teams understand process/machine behavior more closely using available data
- Deploy and monitor the solution
- Work with data and analytics experts to strive for greater functionality in our data systems
- Work together with data architects and data modeling teams

Skills/Competencies:
- Good knowledge of the business vertical, with prior experience solving different use cases in manufacturing or a similar industry
- Ability to bring cross-industry learning to benefit use cases aimed at improving the manufacturing process

Problem Scoping/Definition Skills:
- Experience in problem scoping, solving, and quantification
- Strong analytical skills for working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores
- Ability to foresee and identify all the right data required to solve the problem

Data Wrangling Skills:
- Strong data mining and data wrangling skills for creating the required analytical dataset
- Experience building and optimizing 'big data' pipelines, architectures, and data sets
- Adaptive mindset to improvise on data challenges and employ techniques that drive desired outcomes

Programming Skills:
- Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc.
- Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL
- Experience with object-oriented languages: Scala, Java, C++, etc.

Visualization Skills:
- Know-how of visualization tools such as Power BI or Tableau
- Good storytelling skills to present data in a simple and meaningful manner

Data Engineering Skills:
- Strong data analysis skills to generate findings and insights through exploratory data analysis
- Good understanding of how to transform and connect data of various types and forms
- Great numerical and analytical skills
- Identify opportunities for data acquisition
- Explore ways to enhance data quality and reliability
- Build algorithms and prototypes
- Reformulate existing frameworks to optimize their functioning
- Good understanding of optimization techniques to make the system performant for requirements
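The data deep dives and exploratory data analysis described above often start with simple per-signal profiling of machine data. As a minimal, hypothetical sketch (field names, thresholds, and readings are invented for illustration, not from any real factory system):

```python
from statistics import mean, stdev

def profile_readings(readings, low, high):
    """Profile a list of numeric sensor readings: basic summary stats plus
    an out-of-range count -- the kind of quick check used in a data deep dive."""
    in_range = [r for r in readings if low <= r <= high]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "stdev": round(stdev(readings), 2) if len(readings) > 1 else 0.0,
        "out_of_range": len(readings) - len(in_range),
    }

# Hypothetical spindle-temperature readings from one machine shift
temps = [71.2, 70.8, 72.5, 98.4, 71.0, 69.9]
print(profile_readings(temps, low=60.0, high=90.0))
```

In practice the same check would run over a Spark DataFrame in the data lake rather than a Python list, but the logic of flagging out-of-range machine behavior is the same.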
Posted 1 month ago
10 - 18 years
12 - 22 Lacs
Pune, Bengaluru
Hybrid
Hi, we are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune.

Experience: 10+ years
Location: Bangalore & Pune
CTC: Best in the industry

Job Description / Technical Skills:
- PySpark coding skills
- Proficiency in AWS data engineering services
- Experience in designing data pipelines and data lakes

If interested, kindly share your resume at nupur.tyagi@mounttalent.com
Posted 1 month ago
3 - 7 years
5 - 9 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About Emperen Technologies:
Emperen Technologies is a leading consulting firm committed to delivering tangible results for clients through a relationship-driven approach. With successful implementations for Fortune 500 companies, non-profits, and startups, Emperen Technologies exemplifies a client-centric model that prioritizes values and scalable, flexible solutions, and specializes in navigating complex technological landscapes to empower clients to achieve growth and success.

Role Description:
Emperen Technologies is seeking a highly skilled Senior Master Data Management (MDM) Engineer to join our team on a contract basis. This is a remote position in which the Senior MDM Engineer will be responsible for a variety of key tasks, including data engineering, data modeling, ETL processes, data warehousing, and data analytics. The role demands a strong understanding of MDM platforms, cloud technologies, and data integration, as well as the ability to work collaboratively in a dynamic environment.

Key Responsibilities:
- Design, implement, and manage Master Data Management (MDM) solutions to ensure data consistency and accuracy across the organization.
- Oversee the architecture and operation of data modeling, ETL processes, and data warehousing.
- Develop and execute data quality strategies to maintain high-quality data in line with business needs.
- Build and integrate data pipelines using Microsoft Azure, DevOps, and GitLab technologies.
- Implement data governance policies and ensure compliance with data security and privacy regulations.
- Collaborate with cross-functional teams to define and execute business and technical requirements.
- Analyze data to support business intelligence and decision-making processes.
- Provide ongoing support for data integration, ensuring smooth operation and optimal performance.
- Troubleshoot and resolve technical issues related to MDM, data integration, and related processes.
- Work on continuous improvements of the MDM platform and related data processes.

Qualifications:

Required Skills & Experience:
- Proven experience in Master Data Management (MDM), with hands-on experience on platforms like Profisee MDM and Microsoft Master Data Services (MDS).
- Solid experience with Microsoft Azure cloud technologies.
- Expertise in DevOps processes and in using GitLab for version control and deployment.
- Strong background in data warehousing, Azure Data Lakes, and Business Intelligence (BI) tools.
- Expertise in data governance, data architecture, data modeling, and data integration (particularly using REST APIs).
- Knowledge and experience in data quality, data security, and privacy best practices.
- Experience working with business stakeholders and technical teams to analyze business requirements and translate them into effective data solutions.
- Basic business analysis skills: the ability to assess business needs, translate them into technical requirements, and ensure alignment between data management systems and business goals.

Preferred Qualifications:
- Experience with big data technologies and advanced analytics platforms.
- Familiarity with data integration tools such as Talend or Informatica.
- Knowledge of data visualization tools such as Power BI or Tableau.
- Certifications in relevant MDM, cloud, or data management platforms are a plus.

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
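At its core, the MDM work described above consolidates duplicate records from different systems into a single "golden record". A toy illustration of match-and-survivorship (field names, the matching rule, and the sample data are all hypothetical, not tied to Profisee or MDS):

```python
def normalize(rec):
    # Normalize key fields so trivial formatting differences don't block a match
    return (rec["email"].strip().lower(), rec["name"].strip().lower())

def merge_golden_records(records):
    """Group records that share a normalized match key and keep the most
    recently updated one as the surviving golden record (a simple
    most-recent-wins survivorship rule)."""
    golden = {}
    for rec in records:
        key = normalize(rec)
        if key not in golden or rec["updated"] > golden[key]["updated"]:
            golden[key] = rec
    return list(golden.values())

# Hypothetical CRM extract with one duplicate contact
crm = [
    {"name": "Asha Rao", "email": "ASHA@example.com", "updated": "2024-01-10"},
    {"name": "asha rao", "email": "asha@example.com ", "updated": "2024-03-02"},
    {"name": "Vik Shah", "email": "vik@example.com", "updated": "2023-11-21"},
]
print(merge_golden_records(crm))  # two golden records survive
```

Real MDM platforms layer fuzzy matching, stewardship workflows, and lineage on top of this idea, but the match-key-plus-survivorship core is the same.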
Posted 1 month ago
12 - 22 years
35 - 65 Lacs
Chennai
Hybrid
Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 years
Location: Pan India

Job Description:
- Candidates should have a minimum of 2 years of hands-on experience as an Azure Databricks Architect.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
10 - 18 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 years
Location: Pan India

Job Description:
- Experience in Synapse with PySpark
- Knowledge of big data pipelines and data engineering
- Working knowledge of the MSBI stack on Azure
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
- Hands-on with visualization tools such as Power BI
- Implement end-to-end data pipelines using Cosmos and Azure Data Factory
- Good analytical thinking and problem solving
- Good communication and coordination skills; able to work as an individual contributor
- Requirement analysis; create, maintain, and enhance big data pipelines
- Daily status reporting and interaction with leads
- Version control (ADO, Git) and CI/CD
- Marketing campaign experience; data platform and product telemetry
- Data validation and data quality checks of new streams
- Monitoring of data pipelines created in Azure Data Factory
- Updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
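The data-validation and quality-check duties this kind of role describes often reduce to per-batch rule checks before a load is accepted downstream. A stdlib-only sketch of the idea (rule names, the threshold, and the sample batch are illustrative assumptions, not from any specific pipeline):

```python
def check_batch(rows, required_fields, max_null_ratio=0.05):
    """Validate one ingested batch: reject if it is empty, and flag any
    required field whose share of null/empty values exceeds a threshold."""
    if not rows:
        return ["empty batch"]
    issues = []
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        ratio = nulls / len(rows)
        if ratio > max_null_ratio:
            issues.append(f"{field}: null ratio {ratio:.2f} exceeds {max_null_ratio}")
    return issues

# Hypothetical telemetry batch with one missing event value
batch = [
    {"id": 1, "event": "click"},
    {"id": 2, "event": ""},
    {"id": 3, "event": "view"},
]
print(check_batch(batch, required_fields=["id", "event"]))
```

In an Azure Data Factory or Databricks setting the same checks would typically run as a validation activity or notebook step, with failing batches routed to a quarantine location instead of the target table.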
Posted 1 month ago
10 - 20 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 years
Location: Pan India

Job Description:
Mandatory skill: Azure Databricks (ADB) with Azure Data Lake.

- Lead the architecture, design, and implementation of advanced analytics solutions using Azure Databricks and Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL.
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems.
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.
- Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Essential Skills:
- Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake.
- Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark).
- Deep understanding of Azure services such as Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Experience with ETL/ELT processes, data warehousing, and building data lakes.
- Strong SQL skills and familiarity with NoSQL databases.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of cloud security best practices.

Soft Skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and a proactive approach to identifying and resolving issues.
- Leadership skills, with the ability to manage and mentor a team of data engineers.

Experience:
- Demonstrated expertise (8+ years) in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation.
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
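Auto Loader, mentioned in roles like this one, ingests incrementally by tracking which source files have already been processed in checkpoint state. A toy, Spark-free sketch of that underlying idea (file names are hypothetical; the real feature works against cloud storage with far more machinery):

```python
def load_new_files(landing_files, checkpoint):
    """Return the files not yet seen plus an updated checkpoint set -- a toy
    version of checkpoint-based incremental ingestion, the concept behind
    Databricks Auto Loader."""
    new_files = sorted(f for f in landing_files if f not in checkpoint)
    return new_files, checkpoint | set(new_files)

checkpoint = set()
# First run: the landing zone holds two files, both are new
batch1, checkpoint = load_new_files({"2024-05-01.json", "2024-05-02.json"}, checkpoint)
# Second run: one more file has arrived; only it gets ingested
batch2, checkpoint = load_new_files(
    {"2024-05-01.json", "2024-05-02.json", "2024-05-03.json"}, checkpoint
)
print(batch1, batch2)
```

Because the checkpoint survives between runs, reprocessing is avoided even if the job restarts, which is the same guarantee the managed feature provides at scale.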
Posted 1 month ago
11 - 20 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing Services Private Limited!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 11 - 20 years
Location: Pan India

Job Description:
- Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks).

If interested, please forward your updated resume to sankarspstaffings@gmail.com

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago