5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What you’ll do:
- Lead end-to-end projects using cloud technologies to solve complex business problems
- Provide technology expertise to maximize value for clients and project teams
- Drive strong delivery methodology to ensure projects are delivered on time, within budget and to the client’s satisfaction
- Ensure technology solutions are scalable, resilient, and optimized for performance and cost
- Guide, coach, and mentor project team members for continuous learning and professional growth
- Demonstrate expertise, facilitation, and strong interpersonal skills in internal and client interactions
- Collaborate with ZS experts to drive innovation and minimize project risks
- Work globally with team members to ensure a smooth project delivery
- Bring structure to unstructured work for developing business cases with clients
- Assist ZS Leadership with business case development, innovation, thought leadership and team initiatives

What you’ll bring:
- Candidates must either be in their junior year of a Bachelor's degree or in their first year of a Master's degree specializing in Business Analytics, Computer Science, MIS, MBA, or a related field with academic excellence
- 5+ years of consulting experience in leading large-scale technology implementations
- Strong communication skills to convey technical concepts to diverse audiences
- Significant supervisory, coaching, and hands-on project management skills
- Extensive experience with major cloud platforms like AWS, Azure and GCP
- Deep knowledge of enterprise data management, advanced analytics, process automation, and application development
- Familiarity with industry-standard products and platforms such as Snowflake, Databricks, Redshift, Salesforce and Power BI
- Experience in delivering projects using agile methodologies

Additional skills:
- Capable of managing a virtual global team for the timely delivery of multiple projects
- Experienced in analyzing and troubleshooting interactions between databases, operating systems, and applications
- Travel to global offices as required to collaborate with clients and internal project teams

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development.
Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
Posted 1 week ago
0.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka
On-site
3 months ago TESCRA India

DESCRIPTION
Job Summary: We are seeking a highly skilled and innovative Software Engineer with expertise in AI, LLMs, and data science to join our dynamic team. The ideal candidate will have a strong background in software development, data engineering, and AI/ML, with a specific focus on Large Language Models (LLMs) and AI-powered solutions. This role will involve building scalable AI applications, optimizing data pipelines, and implementing AI-driven automation to enhance business strategies.

Key Responsibilities:
- Develop AI-driven applications with a focus on LLMs, RAG-based AI Agents, and private AI hosting on Bedrock and Azure.
- Implement the full software development lifecycle, with a preference for React-based UI development.
- Design, develop, and maintain scalable backend services using Python (FastAPI), Java (Spring Boot), and Kotlin (KTOR).
- Work with data streams and APIs to enhance automation and real-time data processing.
- Implement UI test automation using Playwright.
- Develop and maintain AI chatbots for customer support and business automation.
- Integrate AI services into business workflows, including META services for Instagram, Facebook, and WhatsApp.
- Develop data ingestion and vector embedding services for efficient AI model training and retrieval.
- Ensure data privacy and security in AI deployments.
- Stay updated with advancements in generative AI, LLMs, and AI agent development.
- Collaborate with cross-functional teams to implement AI-powered solutions for marketing, customer engagement, and business automation.

Requirements:
- 5+ years of experience in Software Engineering, AI, or Data Science.
- Strong programming skills in Python, Java, Kotlin, and JavaScript (React/Next.js, Angular).
- Experience with private AI hosting and LLM-based solutions.
- Hands-on experience with RAG-based AI Agent development.
- Expertise in FastAPI, Spring Boot, KTOR, and Next.js.
- Experience with Docker for containerization and CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps).
- Proficiency in cloud platforms, preferably Azure Databricks and AWS Bedrock.
- Experience in AI-powered business management tools and SMB AI solutions.
- Strong understanding of MLOps, data pipelines, and AI model deployment.
- Familiarity with Figma for UX design.
- Experience in AI chatbot development and AI-powered automation.
- Knowledge of vector search services, prompt engineering, and AI agent tools.

Nice to Have:
- Experience in LLM fine-tuning and private AI model deployment.
- Hands-on experience with AI communication services (Email, WhatsApp, Social Media AI Agents).
- Knowledge of embedding and search services for AI applications.
- Understanding of DevOps and Kubernetes for AI deployment.

This role offers an exciting opportunity to work with cutting-edge AI technologies, develop custom AI applications, and drive AI innovation in business automation.

QUALIFICATIONS
Must Have Skills: AI, LLM, DATA SCIENCE, CHATBOT, RAG, AUTOMATION
Good To Have Skills: PYTHON, JAVASCRIPT
Minimum Education Level: Bachelor's or Equivalent
Years of Experience: 5-12 years

ADDITIONAL INFORMATION
Work Type: Full-Time
Location: Bangalore, Karnataka, India
Job ID: Tescra-Ado-C6FB55
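As an illustration of the retrieval step behind the RAG-based agents this posting mentions, here is a minimal, self-contained sketch. The toy character-frequency embedding and sample documents are assumptions for the sketch only; a production system would call a real embedding model (e.g., hosted on Bedrock or Azure) and a vector store.

```python
# Minimal sketch of RAG retrieval: embed documents, rank by cosine
# similarity to a query, and assemble a grounded prompt.
import numpy as np

documents = [
    "Our return policy allows refunds within 30 days.",
    "Support is available via WhatsApp and email.",
    "Premium plans include priority chat support.",
]

def embed(text: str) -> np.ndarray:
    """Toy embedding: normalized character-frequency vector (stand-in for a real model)."""
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_vectors = np.array([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved context would be passed to an LLM along with the question.
context = retrieve("How do I get a refund?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```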
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.

Skills: Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Responsibilities:
- Lead software application frontend design, development, delivery, and maintenance.
- Translate designs and style guides provided by the UI/UX team into functional user interfaces, ensuring cross-browser compatibility and performance.
- Work with an onshore team to clarify business requirements into product features, acting as a liaison between business and technical teams.
- Resolve technical issues and provide technical support.
- Provide technical guidance and assistance to other software engineers.
- Prepare staffing plans and the allotment of resources.
- Assist project managers in resolving issues and conflicts within their projects.
- Improve customer relations through effective communication, managing expectations, and meeting commitments.
- Keep abreast of technical and organizational developments in your own professional field.

Required Qualifications:
- Bachelor's degree in computer science, information technology, or a related area (equivalent work experience will be considered).
- 1+ years' experience in developing business applications in a full software development life cycle using web technologies.
- Expertise in HTML5, CSS3, and JavaScript, and familiarity with the latest frontend technologies.
- Experience developing single-page applications with Angular 2.0 (and above) or React.
- Responsive design and CSS frameworks, such as Material 2, Ant Design and Twitter Bootstrap.
- Experience with ES6, TypeScript, and query languages such as GraphQL.
- Comfortable with a development environment that includes Nginx, Docker, Redis, Node.js, and CSS pre-processors (SCSS/LESS).
- Experience using at least one of the following cloud platforms: Azure, AWS, GCP; a deep understanding of Azure DevOps, Azure Synapse Analytics, Databricks, Delta Lake and the Lakehouse architecture is preferred.
- Demonstrated ability to independently design and implement the frontend of an entire business module.
- Familiarity with the application and integration of Generative AI, Prompt Engineering and Large Language Models (LLMs) in enterprise solutions.
- Excellent interpersonal skills, particularly in balancing requirements, managing expectations, collaborating with team members, and driving effective results.
- Proactive attitude, ability to work independently, and a desire to continuously learn new skills and technology.
- Excellent written and verbal communication skills in English.

Additional or Preferred Qualifications:
- Master’s degree in computer science, information technology, or related majors.
- Technical lead experience.
- 3+ years’ experience in developing business applications in a full software development life cycle using web technologies.
- Experience using Azure and either AWS or GCP.
- Experience with data visualization tools such as Power BI or Tableau.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Accountabilities:
- Design, develop, and maintain Power BI reports and dashboards for enterprise-wide users.
- Work closely with business users, business analysts, data engineers, and stakeholders to gather requirements and translate them into technical solutions.
- Analyze complex business and operational system requirements and recommend solution options.
- Integrate data from various sources into Power BI using SQL queries, SharePoint, and Dataflows to provide comprehensive insights.
- Write and optimize SQL queries to extract and manipulate data for reporting purposes.
- Participate in meetings and discussions to understand business needs and provide technical insights.
- Stay updated with the latest developments and best practices in Power BI, SQL, and the Power Platform.
- Propose and implement improvements to existing reports, dashboards, and processes.
- Work in an Agile software development methodology, authoring technical documents and specifications.
- Support the production environment, assisting business users with any issues related to data and reporting.

Skills:
- A minimum of 5 years’ experience with the entire Power BI stack is required.
- Proficiency in Power BI, including Power Query, DAX, and Power BI Service.
- Strong understanding of data modeling, data warehousing, and ETL/ELT processes.
- Experience working with data sources such as SQL Server, Oracle, Azure SQL, Azure Synapse, Databricks, Blob Storage and Data Lakes.
- Strong understanding of data visualization best practices.
- Excellent analytical and problem-solving skills.
- Familiarity with Agile development methodologies.
- Knowledge of standard ITIL processes.
- Excellent interpersonal, verbal and written communication skills.
- A flexible attitude with respect to work assignments and new learning.
- Ability to manage multiple and varied tasks with enthusiasm and prioritize workload with attention to detail.
- Willingness to work in a matrix environment.

Good To Have:
- Experience with the Microsoft Power Platform (Power Apps, Power Automate).
- Relevant certifications (e.g., Microsoft Certified: Data Analyst Associate, Microsoft Certified: Azure Data Engineer Associate, Microsoft Certified: Power BI, etc.).
- Experience using Git for version control and deployment.
- Knowledge of Microsoft Fabric.
- Experience coding in Python.

Knowledge And Experience: 5 to 7 years of experience in Information Technology.
Education: Bachelor’s degree in computer science, data science, software development, or another related field; a master’s degree is recommended.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Microsoft’s Cloud business is expanding, and the Cloud Supply Chain (CSCP) organization is responsible for enabling the hardware infrastructure underlying this growth, including AI. CSCP’s vision is to empower customers to achieve more by delivering Cloud and AI capabilities at scale. Our mission is to deliver the world's computer with an industry-leading supply chain. The CSCP organization is responsible for traditional supply chain functions such as plan, source, make, and deliver, but also manages supportability (spares), sustainability, and decommissioning of datacenter assets worldwide. We deliver the core infrastructure and foundational technologies for Microsoft's over 200 online businesses, including Bing, MSN, Office 365, Xbox Live, OneDrive and the Microsoft Azure platform for external customers. Our infrastructure is supported by more than 300 datacenters around the world that enable services for more than 1 billion customers in over 90 countries. Microsoft Cloud Planning (MCP) is the central planning function within CSCP focused on forecasting, demand planning, and supply planning for all Microsoft Cloud services and associated hardware, directly impacting the success of Microsoft's cloud business.

Responsibilities
- Research and develop production-grade models (forecasting, anomaly detection, optimization, clustering, etc.) for our global cloud business using statistical and machine learning techniques.
- Manage large volumes of data, and create new and improved solutions for data collection, management, analysis, and data science model development.
- Drive the onboarding of new data and the refinement of existing data sources through feature engineering and feature selection.
- Apply statistical concepts and cutting-edge machine learning techniques to analyze cloud demand, and optimize data science model code for distributed computing platforms and task automation.
- Work closely with other data scientists and data engineers to deploy models that drive cloud infrastructure capacity planning.
- Present analytical findings and business insights to project managers, stakeholders, and senior leadership; keep abreast of new statistical and machine learning techniques and implement them as appropriate to improve predictive performance.
- Oversee and direct the demand plan or forecast across the company, and evangelize the demand plan with other leaders.
- Drive clarity and understanding of what is required to achieve the plan (e.g., promotions, sales resources, collaborative planning, forecasting, and replenishment [CPFR], budget, engineering changes) and assess plans to mitigate potential risks and issues.
- Oversee the analysis of data and lead the team in identifying trends, patterns, correlations, and insights to develop new forecasting models and improve existing models.
- Oversee the development of short- and long-term (e.g., weekly, monthly, quarterly) demand forecasts; develop and publish key forecast accuracy metrics, and analyze data to identify potential sources of forecasting error.
- Serve as an expert resource and leader of demand planning across the company and ensure that business drivers are incorporated into the plan (e.g., forecast, budget).
- Lead collaboration among the team and leverage data to identify pockets of opportunity to apply state-of-the-art algorithms to improve a solution to a business problem, consistently leveraging knowledge of techniques to optimize analysis.
- Modify statistical analysis tools for evaluating machine learning models. Solve deep and challenging problems in circumstances such as when model predictions are not correct, when models do not match the training data or the designed outcomes, when the data is not clean, when it is unclear which analyses to run, and when the process is ambiguous.
- Provide coaching to team members on business context, interpretation, and the implications of findings. Interpret findings and their implications for multiple businesses, and champion methodological rigor by calling attention to the limitations of knowledge wherever biases in data, methods, and analysis exist.
- Generate and leverage insights that inform future studies and reframe the research agenda. Inform current business decisions by implementing and adapting supply-chain strategies through complex business intelligence.
- Connect across functional teams and the broader organization outside of Demand Planning to advocate for continuous improvement and maintain best practices.
- Lead broad governance and rhythm-of-the-business processes that ensure cross-group collaboration, discussion of key issues, and an opportunity to build proposed solutions to address current or future business needs.

Qualifications

Required:
- M.Sc. in Statistics, Applied Mathematics, Applied Economics, Computer Science or Engineering, Data Science, Operations Research or a similar applied quantitative field.
- 4-8 years of industry experience in developing production-grade statistical and machine learning code in a collaborative team environment.
- Prior experience in machine learning using R or Python (scikit-learn / numpy / pandas / statsmodels).
- Prior experience in time series forecasting.
- Prior experience with typical data management systems and tools such as SQL.
- Knowledge and ability to work within a large-scale computing or big data context, and hands-on experience with Hadoop, Spark, Databricks or similar.
- Excellent analytical skills; ability to understand business needs and translate them into technical solutions, including analysis specifications and models.
- Creative thinking skills with an emphasis on developing innovative methods to solve hard problems under ambiguity and with no obvious solutions.
- Good interpersonal and communication (verbal and written) skills, including the ability to write concise and accurate technical documentation and communicate technical ideas to non-technical audiences.

Preferred:
- PhD in Statistics, Applied Mathematics, Applied Economics, Computer Science or Engineering, Data Science, Operations Research or a similar applied quantitative field.
- Experience in machine learning using R or Python (scikit-learn / numpy / pandas / statsmodels) with a skill level at or near fluency.
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch, CNTK) and solid knowledge of theory and practice.
- Practical and professional experience contributing to and maintaining a large code base with code versioning systems such as Git.
- Knowledge of supply chain models, operations research techniques, optimization modelling and solvers.

Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
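For a concrete feel for the time-series forecasting work this role describes, here is a minimal sketch using pandas and statsmodels (both named in the qualifications). The synthetic demand series, the Holt-Winters settings, and the MAPE metric are assumptions for the sketch, not Microsoft's actual methodology.

```python
# Minimal demand-forecasting sketch: fit Holt-Winters exponential smoothing
# to synthetic monthly demand and produce a six-month-ahead forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand with trend, yearly seasonality, and noise.
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = pd.Series(
    100 + np.arange(48) * 2
    + 10 * np.sin(np.arange(48) * 2 * np.pi / 12)
    + rng.normal(0, 3, 48),
    index=idx,
)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=12
).fit()
forecast = model.forecast(6)  # six-month-ahead demand forecast

# A simple forecast-accuracy metric (MAPE) on the in-sample fit.
mape = (abs(model.fittedvalues - demand) / demand).mean() * 100
print(forecast.round(1))
print(f"in-sample MAPE: {mape:.1f}%")
```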
Posted 1 week ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About this role: Join our dynamic and forward-thinking team within Gartner's Global Strategy and Operations (GSO) division, where innovation blends with impactful results. Our Service Analytics & Productivity team is at the forefront of developing cutting-edge automated data solutions for Gartner’s Global Services & Delivery team. We leverage data to uncover transformative insights and strategies, enhancing productivity and boosting client retention. This is your chance to be part of a high-impact analytics team, dedicated to driving automation, solving complex problems, and managing key stakeholder relationships, ultimately delivering significant and measurable business outcomes.

What you’ll do:
- Business Intelligence Development: Design, develop, and maintain robust business intelligence solutions using Power BI. Ensure these solutions are scalable and stable to support the evolving needs of the business.
- Problem-Solving: Independently tackle complex data challenges to create innovative solutions that push the boundaries of conventional thinking.
- Stakeholder Relationships: Build and nurture strong relationships with stakeholders by understanding and rationalizing automation requirements. Define success clearly and deliver high-quality automated solutions that meet stakeholder expectations.
- Communication: Effectively communicate project status and challenges to leaders and stakeholders, simplifying complex technical concepts for easy understanding.
- Data Quality: Uphold the highest standards of data quality and protection. Ensure adherence to control structures that guarantee data accuracy and quality across all data channels, fostering trust and reliability in our data-driven decisions.
- Ethical Standards & Teamwork: Maintain the highest ethical standards while fostering a culture of teamwork and collaboration, contributing to a positive and productive work environment.

What you’ll need:
- Educational Background: Possess 2+ years of professional experience with a degree in Engineering, Math, Statistics, or related fields. Your academic foundation will be complemented by a passion for data analytics and innovation.
- SQL Proficiency: Demonstrate proficient SQL skills for data extraction and manipulation, enabling the creation of innovative data solutions.
- Data Visualization: Experience with data visualization techniques and tools for impactful storytelling through dashboards, with a primary focus on Power BI.
- Python: Preferred experience in Python and essential libraries such as NumPy and Pandas, with a track record of creating efficient ETL processes, preferably in Databricks.
- Problem-Solving Skills: Possess a knack for creative problem-solving, with sharp qualitative and quantitative abilities and a keen eye for detail and accuracy.
- Communication Skills: Exhibit strong written and verbal communication skills, with the ability to convey technical concepts to a non-technical audience effectively.

What you’ll get: In addition to an outstanding work environment with rapid advancement potential, Gartner associates enjoy exceptional compensation and benefits, including:
- Competitive base salary
- Flexible work environment
- A great work culture

Who are we? At Gartner, Inc. (NYSE: IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 100643

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
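To make the NumPy/Pandas ETL work described above concrete, here is a minimal extract-transform-load sketch in Python. The file names and columns are hypothetical placeholders, not Gartner systems.

```python
# Minimal pandas ETL sketch: read raw records, clean and aggregate,
# and write a curated table for a Power BI dashboard to consume.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Read raw interaction records from CSV."""
    return pd.read_csv(path, parse_dates=["interaction_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Clean and aggregate to one row per client per month."""
    df = df.dropna(subset=["client_id"])
    df["month"] = df["interaction_date"].dt.to_period("M").astype(str)
    return (
        df.groupby(["client_id", "month"], as_index=False)
          .agg(interactions=("interaction_id", "count"))
    )

def load(df: pd.DataFrame, path: str) -> None:
    """Write the curated table; Parquet preserves types downstream."""
    df.to_parquet(path, index=False)

# Usage (paths are illustrative):
# monthly = transform(extract("raw_interactions.csv"))
# load(monthly, "client_monthly_interactions.parquet")
```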
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Scientist – AI/ML, Snowflake, Databricks, Informatica, Power BI
Experience: 5+ Years
Location: Hyderabad (Hybrid)

Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of hands-on experience in AI/ML model development.
- Strong experience with Snowflake and Databricks.
- Proven track record of working in Informatica for data integration, with Power BI for reporting and dashboard development.
- Proficient in Python, SQL, and distributed computing.

Preferred Qualifications:
- Certifications in Snowflake, Databricks, or Informatica.
- Familiarity with MLOps and CI/CD practices for ML deployment.
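As a hedged sketch of the Snowflake-plus-Python workflow this role implies, the snippet below pulls model features into pandas with the snowflake-connector-python package (pandas support requires the package's pandas extra). All connection parameters and table names are placeholders, not a real environment.

```python
# Sketch: query a Snowflake feature table into a pandas DataFrame
# for downstream model development.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="...",            # use a secrets manager in practice
    warehouse="ANALYTICS_WH",  # placeholder warehouse
    database="ML_DB",          # placeholder database
    schema="FEATURES",         # placeholder schema
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT customer_id, tenure_months, monthly_spend, churned "
        "FROM CUSTOMER_FEATURES WHERE snapshot_date = CURRENT_DATE()"
    )
    df = cur.fetch_pandas_all()  # DataFrame of the result set
finally:
    conn.close()

# df would then feed a scikit-learn or Databricks ML training job.
```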
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Maharashtra
On-site
Are you ready to make it happen at Mondelēz International? Join our mission to lead the future of snacking and make it with pride.

As part of the Global MSC Data & Analytics team, you will play a crucial role in supporting the business by developing excellent data models to uncover trends that can drive long-term business results. In this role, you will work closely with business leadership to execute the analytics agenda, identify and nurture external partners for strategic projects, and develop custom models and algorithms to reveal patterns and trends that enhance long-term business performance. Your methodical approach in executing the business analytics program agenda will effectively convey to stakeholders the value that business analytics can deliver.

To excel in this position, you should possess experience in using data analysis to provide recommendations to senior leaders, technical expertise in analytics practices, and a track record of deploying new analytical approaches in complex organizations. Your proficiency in analytics techniques will be crucial in creating impactful business outcomes.

As a key technical leader in the Supply Chain Data & Analytics team, you will be responsible for developing cutting-edge Supply Chain data products. Your role will involve designing, building, and automating data processes, driving advanced analytics, reporting, and insights to optimize Supply Chain performance across the organization. Additionally, you will contribute to the engineering of scalable data solutions and play a hands-on role in managing Supply Chain data products.

The ideal candidate will bring a deep understanding of SAP data structures and processes, proficiency in cloud data engineering within the Google Cloud Platform ecosystem, and experience in developing robust data pipelines for integration and analysis. Hands-on experience with tools like Databricks and expertise in system monitoring and optimization will be advantageous. Your communication and collaboration skills will be essential for effective teamwork and engagement with Supply Chain stakeholders. Experience in delegating work, guiding team members through technical challenges, and thriving in a fast-paced environment will set you up for success in this role. Additionally, strong problem-solving acumen, industry knowledge in consumer goods, and familiarity with Agile development environments will be valuable assets.

To qualify for this position, you should hold a Bachelor's degree in a relevant field and have at least 6 years of hands-on experience in data engineering or a similar technical role, preferably in CPG or manufacturing with a focus on Supply Chain data.

If you are looking to accelerate your career in a dynamic and challenging setting, this role offers a platform to drive impactful change and contribute to the future of snacking. Join us at Mondelēz International and be part of a diverse community that is passionate about empowering people to snack right. Be part of our purpose-driven organization that values growth, innovation, and making a positive impact on the world through sustainable practices and high-quality products. Become one of our makers and bakers who are committed to delivering the right snack, for the right moment, made the right way.

Within-country relocation support is available; for candidates considering international relocation, minimal support is provided through our Volunteer International Transfer Policy.

Job Type: Regular
Business Unit: Analytics & Modelling, Analytics & Data Science
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be joining our data engineering team as an experienced Python + Databricks Developer.

Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark.
- Write efficient Python code for data transformation, cleansing, and analytics.
- Collaborate with data scientists, analysts, and engineers to understand data needs and deliver high-performance solutions.
- Optimize and tune data pipelines for performance and cost efficiency.
- Implement data validation, quality checks, and monitoring.
- Work with cloud platforms, preferably Azure or AWS, to manage data workflows.
- Ensure best practices in code quality, version control, and documentation.

Requirements:
- At least 5 years of professional experience in Python development.
- 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration.
- Strong experience with Spark, particularly PySpark.
- Proficiency in working with large-scale data processing and ETL/ELT pipelines.
- Solid understanding of data warehousing concepts and SQL.
- Experience with Azure Data Factory, AWS Glue, or other data orchestration tools is advantageous.
- Familiarity with version control tools like Git.
- Excellent problem-solving and communication skills.
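For illustration, a minimal PySpark sketch of the read-validate-transform-write pattern this posting describes, targeting Delta Lake as on Databricks. The paths and column names are hypothetical.

```python
# Minimal PySpark pipeline sketch: ingest a raw CSV, apply basic
# validation and typing, and write a curated Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/mnt/raw/orders.csv")  # hypothetical landing path
)

cleaned = (
    raw.filter(F.col("order_id").isNotNull())            # drop bad rows
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .dropDuplicates(["order_id"])                     # basic dedup
)

# Delta is the default table format on Databricks; overwrite for the sketch.
cleaned.write.format("delta").mode("overwrite").save("/mnt/curated/orders")
```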
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As the Lead Data Engineer at Mastercard, you will be responsible for designing and building scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. Your role will involve mentoring and guiding other engineers, fostering a culture of curiosity and continuous improvement, and creating robust ETL/ELT pipelines to serve business-critical use cases. You will lead by example by writing high-quality, testable code, participating in architecture and design discussions, and decomposing complex problems into scalable components aligned with platform and product goals. Championing best practices in data engineering, you will drive collaboration across teams, support data governance and quality efforts, and continuously learn and apply new technologies to improve team productivity and platform reliability.

To succeed in this role, you should have:
- At least 5 years of hands-on experience in data engineering with strong PySpark and Python skills.
- Solid experience in designing and implementing data models, pipelines, and batch/stream processing systems.
- Comfort working with cloud platforms such as AWS, Azure, or GCP, and a strong foundation in data modeling, database design, and performance optimization.
- A bachelor's degree in computer science, engineering, or a related field.
- Experience in Agile/Scrum development environments.
- Experience with CI/CD practices, version control, and automated testing, as well as the ability to mentor and uplift junior engineers.
- Familiarity with cloud-related services like S3, Glue, Data Factory, and Databricks (highly desirable).

Furthermore, exposure to data governance tools and practices, orchestration tools, containerization, and infrastructure automation will be advantageous. A master's degree, relevant certifications, or contributions to open source/data engineering communities will be considered a bonus. Exposure to machine learning data pipelines or MLOps is also a plus.

If you are a curious, adaptable, and driven individual who enjoys problem-solving and continuous improvement, and if you have a passion for building clean data pipelines and cloud-native designs, then this role is perfect for you. Join us at Mastercard and be part of a team that is dedicated to unlocking the potential of data assets and shaping the future of data engineering.
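To illustrate the stream-processing side of this role, here is a brief PySpark Structured Streaming sketch. It uses the built-in rate source and console sink purely so the example is runnable and self-contained; a real pipeline would read from a source such as Kafka and write to a durable sink.

```python
# Structured Streaming sketch: windowed event counts with a watermark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_demo").getOrCreate()

# The built-in "rate" source emits (timestamp, value) rows for testing.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Count events per 30-second window; the watermark bounds late data.
counts = (
    events.withWatermark("timestamp", "1 minute")
          .groupBy(F.window("timestamp", "30 seconds"))
          .count()
)

query = (
    counts.writeStream.outputMode("update")
          .format("console")   # stand-in for a durable sink
          .start()
)
# query.awaitTermination()  # blocks the driver; left commented in the sketch
```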
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience:
- First Class Degree in Engineering/Technology (4-year graduate course)
- 9 to 11 years’ experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have):
- ETL: Hands-on experience of building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise around data warehousing concepts, and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Experience of using a job scheduler, e.g., Autosys; exposure to Business Intelligence tools, e.g., Tableau, Power BI

Certification in one or more of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
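As a small illustration of the data validation and quality-control skills listed above, here is a self-contained Python/pandas sketch. The table, columns, and checks are hypothetical examples, not Citi controls.

```python
# Declarative data-quality checks run before a table is published.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return descriptions of failed checks; an empty list means all passed."""
    failures = []
    if df["account_id"].isna().any():
        failures.append("null account_id values")
    if df["account_id"].duplicated().any():
        failures.append("duplicate account_id values")
    if (df["balance"] < 0).any():
        failures.append("negative balances")
    return failures

# Toy input with deliberate defects so every check fires.
df = pd.DataFrame(
    {"account_id": [1, 2, 2, None], "balance": [100.0, -5.0, 20.0, 30.0]}
)
for failure in run_quality_checks(df):
    print("FAILED:", failure)  # a real pipeline would block the publish step
```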
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Company: Sonny's Enterprises is the world's largest manufacturer of conveyorized car wash equipment, parts, and supplies. We are the industry leader, recognized and awarded by the International Car Wash Association for innovating new technologies to advance the industry, with products proudly designed and built in the USA. Our culture thrives on finding new and better ways to accelerate what’s next. We embrace change and the opportunity it produces to maximize the potential of our most valuable resource — our PEOPLE! We invite you to explore our opportunities and grow your career with us.

Position Summary: The Business Applications Manager (GCC India) is responsible for leading the India-based global capability center (GCC) team supporting enterprise applications, digital platforms, and business technology services. This role oversees functional and technical delivery across SAP, Salesforce, ecommerce, data analytics, and internal collaboration tools (e.g., SharePoint). The Manager ensures systems stability, drives enhancement initiatives, supports global projects, and leads a cross-functional team of analysts, developers, and technical staff.

Key Responsibilities: Manage day-to-day operations and delivery for the India-based team supporting:
- SAP Center of Excellence (functional and technical)
- Salesforce (Sales Cloud, Service Cloud, Field Service)
- Ecommerce integration and application support
- Cybersecurity analyst (supporting enterprise InfoSec)
- SharePoint/intranet developer
- Data engineers and data analysts
- Linux administrators and cloud platform support

Additional Responsibilities:
- Provide leadership, mentorship, and performance management for the team
- Collaborate with U.S.-based IT leaders and business stakeholders to align priorities and deliver projects
- Oversee support and minor enhancements for business-critical applications across finance, sales, service, supply chain, and HR
- Ensure adherence to change control, system documentation, and support best practices
- Monitor team KPIs and ensure high system availability and responsiveness to incidents
- Support hiring, training, and capability building for the India-based applications team
- Align with global enterprise architecture, cybersecurity, and data standards

Qualifications:
- Bachelor’s Degree in Engineering, Information Technology, or a related field
- 10+ years of experience in enterprise applications or IT delivery roles
- 5+ years managing cross-functional technology teams

Required Skills:
- Proven track record of managing SAP ECC/S/4HANA environments
- Experience managing Salesforce support teams and application owners
- Exposure to ecommerce systems, cloud environments (preferably Azure), and data analytics platforms
- Experience coordinating with U.S.-based/global teams and supporting distributed business operations

Preferred Skills:
- Prior experience in a Global Capability Center (GCC) or shared services center
- Experience with Power BI, Databricks, Azure Data Factory, or similar tools
- Understanding of cybersecurity controls and frameworks
- Prior experience supporting manufacturing, distribution, or industrial sectors

Equal Opportunity Statement: Sonny’s is proud to be an equal opportunity employer and is committed to maintaining a diverse and inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, marital status, familial status, sexual orientation, pregnancy, genetic information, gender identity, gender expression, national origin, ancestry, citizenship status, veteran status, and any other legally protected status under federal, state, or local anti-discrimination laws.
Posted 1 week ago
10.0 - 15.0 years
30 - 45 Lacs
Hyderabad
Hybrid
Job Title: IT - Lead Architect, AI
Years of Experience: 10-15 Years
Mandatory Skills: Data Architect, Team Leadership, AI/ML Expert, Azure, SAP
Good to Have: Visualization, Python

Key Responsibilities:
- Lead a team of architects and engineers focused on strategic Azure architecture and AI projects.
- Develop and maintain the company's data architecture strategy and lead design/architecture validation reviews.
- Drive the adoption of new AI/ML technologies and assess their impact on data strategy.
- Architect scalable data flows, storage, and analytics platforms, ensuring secure and cost-effective solutions.
- Establish data governance frameworks and promote best practices for data quality.
- Act as a technical advisor on complex data projects and collaborate with stakeholders.
- Work with technologies including SQL, Synapse, Databricks, Power BI, Fabric, Python, SQL Server, and NoSQL.

Required Qualifications & Experience:
- Bachelor’s or Master’s degree in Computer Science or a related field.
- At least 5 years in a leadership role in data architecture.
- Expert in Azure, Databricks, and Synapse.
- Proven experience leading technical teams and strategic projects, specifically designing and implementing AI solutions within data architectures.
- Deep knowledge of cloud data platforms (Azure, Fabric, Databricks, AWS), data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- 5 years of experience in AI model design and deployment.
- Strong experience in solution architecture.
- Excellent communication, stakeholder management, and problem-solving skills.
Posted 1 week ago
8.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Job Title: IT - Lead Engineer/Architect, Azure Lake
Years of Experience: 8-10 Years
Mandatory Skills: Azure, Data Lake, Databricks, SAP BW

Key Responsibilities:
- Lead the development and maintenance of data architecture strategy, including design and architecture validation reviews with all stakeholders.
- Architect scalable data flows, storage, and analytics platforms in cloud/hybrid environments, ensuring secure, high-performing, and cost-effective solutions.
- Establish comprehensive data governance frameworks and promote best practices for data quality and enterprise compliance.
- Act as a technical leader on complex data projects and drive the adoption of new technologies, including AI/ML.
- Collaborate extensively with business stakeholders to translate needs into architectural solutions and define project scope.
- Support a wide range of data lake and lakehouse technologies (SQL, Synapse, Databricks, Power BI, Fabric).

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- At least 3 years in a leadership role in data architecture.
- Proven ability leading architecture/AI/ML projects from conception to deployment.
- Deep knowledge of cloud data platforms (Microsoft Azure, Fabric, Databricks), data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- Experience in designing and implementing AI solutions within cloud architecture.
- 3 years as a project lead in large-scale projects.
- 5 years in development with Azure, Synapse, and Databricks.
- Excellent communication and stakeholder management skills.
Posted 1 week ago
4.0 - 8.0 years
7 - 17 Lacs
Hyderabad
Hybrid
Job Title: IT - Senior Engineer, Azure Lake
Years of Experience: 4-6 Years
Mandatory Skills: Azure, Data Lake, SAP BW, Power BI, Tableau

Key Responsibilities:
- Develop and maintain data architecture strategy, including design and architecture validation reviews.
- Architect scalable data flows, storage, and analytics platforms in cloud/hybrid environments, ensuring secure and cost-effective solutions.
- Establish and enforce data governance frameworks, promoting data quality and compliance.
- Act as a technical advisor on complex data projects and collaborate with stakeholders on project scope and planning.
- Drive adoption of new technologies, conduct technology watch, and define standards for data management.
- Develop using SQL, Synapse, Databricks, Power BI, and Fabric.

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience in data architecture, with at least 3 years in a leadership role.
- Deep knowledge of Azure/AWS, Databricks, Synapse, and other cloud data platforms.
- Understanding of SAP technologies (SAP BW, SAP Datasphere, HANA, S/4, ECC) and visualization tools (Power BI, Tableau).
- Understanding of data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- Experience with AI/ML and familiarity with data mesh/fabric concepts.
- 5 years in back-end/full stack development in large-scale projects with Azure Synapse / Databricks.
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Donaldson is committed to solving the world’s most complex filtration challenges. Together, we make cool things. As an established technology and innovation leader, we are continuously evolving to meet the filtration needs of our changing world. Join a culture of collaboration and innovation that matters, and a chance to learn, effect change, and make meaningful contributions at work and in communities.

We are seeking a skilled and motivated Data Engineer II to join the Corporate Technology Data Engineering Team. This role is important for developing and sustaining our data infrastructure, which supports a wide range of R&D, sensor-based, and modeling technologies. The Data Engineer II will design and maintain pipelines that enable the use of complex datasets. This position directly empowers faster decision making by building trustworthy data flows and access for engineers and scientists.

Primary Role Responsibilities:
- Develop and maintain data ingestion and transformation pipelines across on-premise and cloud platforms.
- Develop scalable ETL/ELT pipelines that integrate data from a variety of sources (e.g., form-based entries, SQL databases, Snowflake, SharePoint).
- Collaborate with data scientists, data analysts, simulation engineers and IT personnel to deliver data engineering and predictive data analytics projects.
- Implement data quality checks, logging, and monitoring to ensure reliable operations.
- Follow and maintain data versioning, schema evolution, and governance controls and guidelines.
- Help administer Snowflake environments for cloud analytics.
- Work with more senior staff to improve solution architectures and automation.
- Stay updated with the latest data engineering technologies and trends.
- Participate in code reviews and knowledge sharing sessions.
- Participate in and plan new data projects that impact business and technical domains.

Required Qualifications & Relevant Experience:
- Bachelor’s or master’s degree in computer science, data engineering, or a related field.
- 1-3 years of experience in data engineering, ETL/ELT development, and/or backend software engineering.
- Demonstrated expertise in Python and SQL.
- Demonstrated experience working with data lakes and/or data warehouses (e.g., Snowflake, Databricks, or similar).
- Familiarity with source control and development practices (e.g., Git, Azure DevOps).
- Strong problem-solving skills and eagerness to work with cross-functional, globalized teams.

Preferred Qualifications:
- The required qualifications, plus working experience and knowledge of scientific and R&D workflows, including simulation data and LIMS systems.
- Demonstrated ability to balance operational support and longer-term project contributions.
- Experience with Java.
- Strong communication and presentation skills.
- Motivated and self-driven learner.

Employment opportunities for positions in the United States may require use of information which is subject to the export control regulations of the United States. Hiring decisions for such positions are required by law to be made in compliance with these regulations. Applicants for employment opportunities in other countries must be able to meet the comparable export control requirements of that country and of the United States.

Donaldson Company has been made aware that there are several recruiting scams that are targeting job seekers. These scams have attempted to solicit money for job applications and/or collect confidential information; Donaldson will never solicit money during the application or recruiting process. Donaldson only accepts online applications through our Careers | Donaldson Company, Inc. website, and any communication from a Donaldson recruiter would be sent using a donaldson.com email address. If you have any questions about the legitimacy of an employment opportunity, please reach out to talentacquisition@donaldson.com to verify that the communication is from Donaldson.

Our policy is to provide equal employment opportunities to all qualified persons without regard to race, gender, color, disability, national origin, age, religion, union affiliation, sexual orientation, veteran status, citizenship, gender identity and/or expression, or other status protected by law.
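To make the incremental ETL/ELT responsibilities above concrete, here is a self-contained sketch of a watermark-based ingestion pattern in Python. It uses the standard-library sqlite3 module purely so the example runs anywhere; the tables and columns are hypothetical, not Donaldson systems.

```python
# Watermark-based incremental load: copy only rows newer than the last
# recorded load timestamp, then advance the watermark.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_readings (id INTEGER, reading REAL, loaded_at TEXT);
    CREATE TABLE target_readings (id INTEGER, reading REAL, loaded_at TEXT);
    CREATE TABLE watermark (last_loaded TEXT);
    INSERT INTO watermark VALUES ('1970-01-01T00:00:00');
    INSERT INTO source_readings VALUES
        (1, 0.42, '2024-01-01T08:00:00'),
        (2, 0.57, '2024-01-02T08:00:00');
""")

def incremental_load(conn: sqlite3.Connection) -> int:
    """Copy rows newer than the stored watermark; return how many moved."""
    (wm,) = conn.execute("SELECT last_loaded FROM watermark").fetchone()
    rows = conn.execute(
        "SELECT id, reading, loaded_at FROM source_readings "
        "WHERE loaded_at > ?",
        (wm,),
    ).fetchall()
    if rows:
        conn.executemany("INSERT INTO target_readings VALUES (?, ?, ?)", rows)
        new_wm = max(r[2] for r in rows)
        conn.execute("UPDATE watermark SET last_loaded = ?", (new_wm,))
    return len(rows)

print(incremental_load(conn))  # 2 on the first run
print(incremental_load(conn))  # 0 on the second run: nothing new to load
```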
Posted 1 week ago
2.0 - 7.0 years
6 - 16 Lacs
Hyderabad
Work from Office
Job Title: IT - Engineer / Senior Engineer
Years of Experience: 2-6 Years
Mandatory Skills: Power BI, Tableau

Key Responsibilities:
- Develop on data lake technologies including SQL, Synapse, Databricks, Power BI, and Fabric.
- Provide tool usage support to business stakeholders (10-20% of the role).
- Apply data modeling in the semantic layer and understand system structures.
- Support end-user training and resolve performance and tuning issues.

Required Qualifications & Experience:
- 3 years as an analyst in large-scale projects.
- 5 years in back-end/full stack development with Fabric / Power BI.
- Experience in SQL environments and Agile (Scrum) methodologies.
- Familiarity with data architecture concepts and data modeling.
Posted 1 week ago
8.0 - 13.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Manager Data Engineering What You Will Do Let’s do this. Let’s change the world. In this vital role you will lead and scale an impactful team of data engineers. This role blends technical depth with strategic oversight and people leadership. The ideal candidate will oversee the execution of data engineering initiatives, collaborate with business analysts and multi-functional teams, manage resource capacity, and ensure delivery aligned to business priorities. In addition to technical competence, the candidate will be adept at managing agile operations and driving continuous improvement. Roles & Responsibilities: Bring strong rapid prototyping skills, quickly translating concepts into working code. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies. Design, develop, and implement robust data architectures and platforms to support business objectives. Oversee the development and optimization of data pipelines and data integration solutions. Establish and maintain data governance policies and standards to ensure data quality, security, and compliance. Architect and manage cloud-based data solutions, using AWS or other preferred platforms. Lead and motivate an impactful data engineering team to deliver exceptional results. Identify, analyze, and resolve complex data-related challenges. Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions. Stay abreast of emerging data technologies and explore opportunities for innovation. Lead and manage a team of data engineers, ensuring appropriate workload distribution, goal alignment, and performance management. Work closely with business analysts and product collaborators to prioritize and align engineering output with business objectives. What We Expect Of You We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate, Master's, or Bachelor's degree and 8 to 13 years of experience; computer science and engineering preferred, though other engineering fields are considered. Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Proficient in Python, PySpark, and SQL. Hands-on experience with big data ETL performance tuning. Proven ability to lead and develop impactful data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges.
Strong communication skills for collaborating with business and technical teams alike. Preferred Qualifications: Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with Apache Spark and Apache Airflow. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Experience with AWS, GCP, or Azure cloud services. Professional Certifications: AWS Certified Data Engineer preferred; Databricks certification preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. What You Can Expect Of Us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
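To make the ETL performance tuning requirement concrete, here is a minimal PySpark sketch of one of the most common tuning patterns: broadcasting a small dimension table to avoid a shuffle-heavy join. The S3 paths, schemas, and keys are hypothetical.

```python
# Sketch of a common big data ETL tuning pattern: broadcast the small
# dimension table so the large fact table is never shuffled for the join.
# Paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-tuning-sketch").getOrCreate()

facts = spark.read.parquet("s3://bucket/claims/")        # large table
sites = spark.read.parquet("s3://bucket/dim_sites/")     # small table

# Broadcast join: ships the small table to every executor instead of
# shuffling the large one across the cluster.
joined = facts.join(F.broadcast(sites), on="site_id", how="left")

# Repartition by the write key so output files align with downstream reads.
(joined
    .repartition("site_id")
    .write.mode("overwrite")
    .partitionBy("site_id")
    .parquet("s3://bucket/claims_enriched/"))
```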
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Azure Cloud – Technology Assurance As a Risk Assurance Senior, you’ll contribute technically to Risk Assurance client engagements and internal projects. An important part of your role will be to assist fellow Seniors and Managers while actively participating in client engagements. Similarly, you’ll anticipate and identify risks within engagements and share any issues with senior members of the team. In line with EY's commitment to quality, you’ll confirm that work is of high quality and is reviewed by the next-level reviewer. As a member of the team, you’ll help to create a positive learning culture and assist fellow team members while delivering an assignment. The opportunity We’re looking for professionals with at least 3 years of experience. You’ll be part of a cross-functional team that’s responsible for the full software development life cycle, from conception to deployment. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of a new service offering. Skills and Summary of Accountabilities: Designing, architecting, and developing solutions leveraging Azure cloud to ingest, process and analyse large, disparate data sets to exceed business requirements. Proficiency in Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics and Azure Data Lake Storage for data storage and processing; designing data pipelines using these technologies. Working knowledge of data warehousing/modelling, ETL/ELT pipelines, and data democratization using cloud services. Design, build and maintain efficient, reusable, and reliable code ensuring the best possible performance, quality, and responsiveness of applications using reliable Python code. Automate tasks through Python scripting, databases, and other advanced technologies like Databricks, Synapse Studio, ADF, etc. Exposure to client-facing roles, collaborating with cross-functional teams including internal audit, IT security, and business stakeholders to assess control effectiveness and facilitate remediation activities. Preferred: knowledge and understanding of IT controls, risk, and compliance. Design IT risk controls frameworks such as IT SOX. Testing of internal controls such as IT general controls, IT application controls, IPE-related controls, interface controls, etc. To qualify for the role, you must have: 3 years of experience in building end-to-end business solutions using big data and data engineering. Expertise in core Microsoft Azure Data Services (e.g., Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, Data Lake services). Familiarity with integration services such as Azure Logic Apps, Function Apps, Stream Analytics, Triggers, Event Hubs, etc. Expertise in cloud-based big data integration and infrastructure using Azure Databricks and the Apache Spark framework. Must have Python and SQL; R and Scala preferred. Experience developing software tools using pandas, NumPy, and other libraries. Hands-on expertise in using Python frameworks (like Django, Pyramid, Flask).
A substantial background in data extraction and transformation, and in developing data pipelines using MS SSIS, Informatica, Talend, or other on-premises tools, is preferred. Knowledge of Power BI or other BI tools is preferred. Good understanding of version control with Git, JIRA, change/release management, build/deploy, and CI/CD with Azure DevOps. Ideally, you'll also have a Bachelor's Degree or above in mathematics, information systems, statistics, computer science, data analytics or related disciplines. Experience with AI/ML is a plus. Certification in DP-203 Azure Data Engineer (or similar) is preferred. Ability to communicate clearly and concisely and use strong writing and verbal skills to communicate facts, figures, and ideas to others. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
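Because the role pairs data engineering with controls testing, a short pandas sketch shows how the two meet in practice: automating an IT general controls test that flags access grants lacking a timely approval. The file and column names are invented for illustration.

```python
# Sketch: automating an IT general controls test with pandas. Flags
# accounts whose access was provisioned without a matching approval.
# File and column names are hypothetical.
import pandas as pd

access = pd.read_csv("user_access_grants.csv", parse_dates=["granted_on"])
approvals = pd.read_csv("access_approvals.csv", parse_dates=["approved_on"])

merged = access.merge(approvals, on=["user_id", "system"], how="left")

# An exception is any grant with no approval, or an approval dated after
# the grant took effect.
exceptions = merged[
    merged["approved_on"].isna() | (merged["approved_on"] > merged["granted_on"])
]

print(f"{len(exceptions)} of {len(access)} grants failed the control test")
exceptions.to_csv("itgc_exceptions.csv", index=False)
```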
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Senior Data Architect Work Location: Remote Lead Time: Immediate Key Skills: Databricks experience is required. Must have excellent communication skills. Must Have: 6+ years of experience in Big Data architecture, Data Engineering and AI-assisted BI solutions within Databricks and AWS technologies. 3+ years of experience with AWS data services like S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS and others. 3+ years of experience in building Delta Lakes and open formats using technologies like Databricks and AWS analytics services. Proven expertise in Databricks, Apache Spark, Delta Lake, and MLflow. Strong programming skills in Python, SQL, and PySpark. Experience with SAP data extraction and integration (e.g., SAP BW, S/4HANA, BODS). Hands-on experience with cloud platforms (Azure, AWS, or GCP), especially in cost optimization and data lakehouse architectures. Solid understanding of data modeling, ETL/ELT pipelines, and data warehousing. We strongly believe that data and analytics are strategic drivers for future success. We are building a world-class advanced analytics team that will solve some of the most complex strategic problems and deliver topline growth and operational efficiencies across our business. The Analytics team is part of the Organization and is responsible for driving organic growth by leveraging big data and advanced analytics. The team reports to the VP and Chief Data Officer at TEIS, works closely with the SVP of Corporate Strategy, and has regular interactions with the company’s C-Suite. We are on an exciting journey to build and scale our advanced analytics practice, and are looking for a Senior Data Architect who has experience building data lake and data warehouse architectures using on-prem and cloud technologies. We are seeking a highly skilled and experienced Data Architect with a strong background in Big Data technologies, Databricks solutioning, and SAP integration within the manufacturing industry. The ideal candidate will have a proven track record of leading data teams, architecting scalable data platforms, and optimizing cloud infrastructure costs. This role requires deep hands-on expertise in Apache Spark, Python, SQL, and cloud platforms (Azure/AWS/GCP). Key Responsibilities: Design and implement scalable, secure, and high-performance Big Data architectures using Databricks, Apache Spark, and cloud-native services. Lead the end-to-end data architecture lifecycle, from requirements gathering to deployment and optimization. Design repeatable and reusable data ingestion pipelines for bringing in data from source systems such as SAP, Salesforce, and HR, factory, and marketing systems (a sketch of this pattern follows the qualifications below). Collaborate with cross-functional teams to integrate SAP data sources into modern data platforms. Drive cloud cost optimization strategies and ensure efficient resource utilization. Provide technical leadership and mentorship to a team of data engineers and developers. Develop and enforce data governance, data quality, and security standards. Translate complex business requirements into technical solutions and data models. Stay current with emerging technologies and industry trends in data architecture and analytics. Required Skills & Qualifications: 6+ years of experience in Big Data architecture, Data Engineering and AI-assisted BI solutions within Databricks and AWS technologies.
3+ years of experience with AWS data services like S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS and others. 3+ years of experience in building Delta Lakes and open formats using technologies like Databricks and AWS analytics services. Bachelor’s degree in computer science, information technology, data science, data analytics or a related field. Proven expertise in Databricks, Apache Spark, Delta Lake, and MLflow. Strong programming skills in Python, SQL, and PySpark. Experience with SAP data extraction and integration (e.g., SAP BW, S/4HANA, BODS). Hands-on experience with cloud platforms (Azure, AWS, or GCP), especially in cost optimization and data lakehouse architectures. Solid understanding of data modeling, ETL/ELT pipelines, and data warehousing. Demonstrated team leadership and project management capabilities. Excellent communication, problem-solving and stakeholder management skills. Preferred Qualifications: Experience in the manufacturing domain, with knowledge of production, supply chain, and quality data. Certifications in Databricks, cloud platforms, or data architecture. Familiarity with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform).
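As referenced in the responsibilities above, here is a minimal PySpark sketch of a repeatable ingestion helper that appends any source extract to a bronze Delta table with audit columns. The bucket layout, helper name, and source paths are assumptions for illustration.

```python
# Sketch of a reusable, source-agnostic ingestion helper for a Delta Lake
# on S3. Bucket names, paths, and the helper name are placeholders.
from pyspark.sql import SparkSession, DataFrame, functions as F

spark = SparkSession.builder.appName("delta-ingest-sketch").getOrCreate()

def ingest_to_bronze(df: DataFrame, source_name: str) -> None:
    """Append a source extract to its bronze Delta table, stamped with audit columns."""
    (df.withColumn("_ingested_at", F.current_timestamp())
       .withColumn("_source", F.lit(source_name))
       .write.format("delta")
       .mode("append")
       .save(f"s3://datalake/bronze/{source_name}"))

# The same helper serves SAP, Salesforce, HR, or factory extracts alike:
sap_orders = spark.read.parquet("s3://landing/sap/orders/")
ingest_to_bronze(sap_orders, "sap_orders")
```

Centralizing the audit stamping in one helper is what makes the pipeline "repeatable": each new source needs only a reader and a name.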
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Preferred Education Master's Degree Required Technical And Professional Expertise Total experience: 6-7 years (relevant: 4-5 years). Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Storage. Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java. Preferred Technical And Professional Experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
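Since the posting pairs data engineering with enterprise search, here is a short sketch of the hand-off point: bulk-indexing cleansed records into Elasticsearch. It assumes the elasticsearch Python client; the endpoint, index name, and documents are placeholders.

```python
# Sketch: loading cleansed records into Elasticsearch for an enterprise
# search application. Endpoint, index name, and documents are placeholders;
# requires the elasticsearch package.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

records = [
    {"ticket_id": 1, "summary": "Pipeline failed on null keys", "severity": "high"},
    {"ticket_id": 2, "summary": "Schema drift in orders feed", "severity": "medium"},
]

# Bulk-index so large batches avoid one HTTP round trip per document.
helpers.bulk(
    es,
    ({"_index": "support-tickets", "_id": r["ticket_id"], "_source": r} for r in records),
)

es.indices.refresh(index="support-tickets")  # make the docs searchable now
print(es.count(index="support-tickets"))
```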
Posted 1 week ago
4.0 - 10.0 years
0 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
Roles and Responsibilities: Design, develop, and maintain Master Data Management (MDM) solutions using Informatica MDM to ensure data quality, integrity, and consistency across the organization. Collaborate with cross-functional teams to identify business requirements and implement MI reporting solutions using SQL queries on the Databricks platform. Develop complex SQL queries to extract insights from large datasets stored in various databases such as Oracle, MySQL, PostgreSQL, etc. Troubleshoot issues related to MDM implementation, data integration, and reporting by analyzing log files and applying debugging techniques. Job Requirements: 4-10 years of experience in Business System Analysis with expertise in Master Data Management (MDM). Strong understanding of the Informatica MDM toolset, including its architecture, features, and best practices. Proficiency in writing complex SQL queries to extract insights from large datasets using database management systems like Oracle, MySQL, PostgreSQL, etc. Experience working with the Databricks platform to build scalable big data pipelines.
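The "complex SQL" in MDM work often means survivorship logic: collapsing duplicate source records into one golden record per entity. Here is a self-contained Python sketch of that pattern using a window function; it runs on sqlite purely for portability, and the table and columns are invented.

```python
# Sketch of a "golden record" style query typical of MDM reporting: keep
# the most recently updated row per customer using ROW_NUMBER(). Table
# and column names are hypothetical; sqlite is used so the example runs
# anywhere, but the SQL is the same shape on Databricks or Oracle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_staging (customer_id, source, email, updated_at);
    INSERT INTO customer_staging VALUES
        (1, 'CRM', 'a@x.com',    '2024-01-05'),
        (1, 'ERP', 'a@corp.com', '2024-03-01'),
        (2, 'CRM', 'b@x.com',    '2024-02-10');
""")

golden = conn.execute("""
    SELECT customer_id, source, email
    FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY customer_id ORDER BY updated_at DESC
        ) AS rn
        FROM customer_staging
    )
    WHERE rn = 1
""").fetchall()

print(golden)  # one surviving record per customer_id
```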
Posted 1 week ago
12.0 years
0 Lacs
India
On-site
Job Title: Spark Scala Architect / Big Data Experience: 12 to 16 Years Location: Bangalore, Pune, Hyderabad, Mumbai Employment Type: Full-Time Interview Mode: Virtual Notice Period: Immediate to 15 Days Maximum Job Description: We are looking for an experienced Databricks + PySpark Architect to lead the design and implementation of advanced data processing solutions on cloud. The ideal candidate will have a strong background in big data architecture, Databricks, and PySpark, with a solid understanding of AWS services. Core Roles & Responsibilities: Architect and implement scalable data pipelines using Databricks and PySpark. Lead end-to-end architecture and solution design for large-scale data platforms. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Optimize performance and scalability of data engineering workflows. Integrate and deploy solutions on AWS cloud using services like S3, Glue, EMR, Lambda, etc. Ensure best practices for data security, governance, and compliance. Guide and mentor development teams in big data technologies and architecture. Primary Skill: Expertise in Databricks and PySpark. Strong hands-on experience with data engineering on cloud platforms. Secondary Skill: Proficiency with AWS services for data processing and storage. Familiarity with DevOps practices and CI/CD pipelines on cloud.
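For a flavor of the pipelines this role architects, here is a minimal sketch of an S3-to-S3 aggregation job, written in PySpark for brevity even though the role also calls for Scala. The bucket names, schema, and partition keys are illustrative.

```python
# Sketch of an S3-based aggregation pipeline: read raw JSON events,
# aggregate, and write partitioned parquet to a curated zone. Buckets,
# columns, and partition keys are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-pipeline-sketch").getOrCreate()

events = (spark.read.json("s3://raw-zone/clickstream/2024/")
          .where(F.col("event_type").isNotNull()))

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .count())

# Partitioning by date lets downstream jobs prune to only the days they need.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated-zone/clickstream_daily/"))
```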
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
Work Location: Remote Work Timings: 12 pm - 9 pm Notice Period: Immediate, or currently serving notice. Must have excellent communication skills. Overall experience: 12-14 years. Candidates should have 6+ years of relevant experience as a Scrum Master, specifically managing Data Analytics projects. This experience must be clearly detailed in the resume. Scrum Master – Data Analytics We’re seeking a highly motivated Scrum Master who thrives in fast-paced environments, inspires teams, and enables the delivery of impactful data and analytics solutions for manufacturing and supply chain operations. Act as Scrum Master for Agile teams delivering data and analytics solutions for manufacturing and supply chain operations. Work closely with Product Owners to align on business priorities, maintain a clear and actionable backlog, and ensure stakeholder needs are met. Facilitate core Agile ceremonies: Sprint Planning, Daily Standups, Backlog Refinement, Reviews, and Retrospectives. Guide the team through data-focused sprints, including work on ingestion, transformation, integration, and reporting. Track progress, remove blockers, and drive continuous improvement in team performance and delivery. Collaborate with data engineers, analysts, architects, and business teams to ensure high-quality, end-to-end solutions. Promote Agile best practices across platforms like SAP ECC, IBP, HANA, BOBJ, Databricks, and Tableau. Monitor and share Agile metrics (e.g., velocity, burn-down) to keep teams and stakeholders aligned. Support team capacity planning, identify bottlenecks early, and help the team stay focused and accountable. Foster a culture of collaboration, adaptability, and frequent customer feedback to ensure business value is delivered in every sprint. Guide the team to continuously break work down into smaller components; smaller work items flow better, and eight half-day stories or tasks are better than one four-day story or task. Guide the team to always provide clarity on stories and tasks through detailed descriptions and explicit acceptance criteria. Bring the team’s focus in daily standup meetings to completing work rather than simply working on it.
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary We are seeking a highly skilled Sr. Developer with 7 to 10 years of experience to join our dynamic team. The ideal candidate will have expertise in Python, Databricks SQL, Databricks Workflows, and PySpark. This role operates in a hybrid work model with day shifts, offering the opportunity to work on innovative projects that drive our company's success. Responsibilities Develop and maintain scalable data processing systems using Python and PySpark to enhance data analytics capabilities. Collaborate with cross-functional teams to design and implement Databricks Workflows that streamline data operations. Optimize Databricks SQL queries to improve data retrieval performance and ensure efficient data management. Provide technical expertise in Python programming to support the development of robust data solutions. Oversee the integration of data sources into Databricks environments to facilitate seamless data processing. Ensure data quality and integrity by implementing best practices in data validation and error handling (see the sketch below). Troubleshoot and resolve complex technical issues related to Databricks and PySpark environments. Contribute to the continuous improvement of data processing frameworks and methodologies. Mentor junior developers and provide guidance on best practices in data engineering. Collaborate with stakeholders to gather requirements and translate them into technical specifications. Conduct code reviews to ensure adherence to coding standards and best practices. Stay updated with the latest industry trends and technologies to drive innovation in data engineering. Document technical processes and workflows to support knowledge sharing and team collaboration. Qualifications Possess strong proficiency in Python programming and its application in data engineering. Demonstrate expertise in Databricks SQL and its use in optimizing data queries. Have hands-on experience with Databricks Workflows for efficient data processing. Show proficiency in PySpark for developing scalable data solutions. Exhibit excellent problem-solving skills and the ability to troubleshoot complex technical issues. Have a solid understanding of data integration techniques and best practices. Display strong communication skills to collaborate effectively with cross-functional teams. Certifications Required: Databricks Certified Data Engineer Associate; Python Institute PCEP Certification.
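As referenced under Responsibilities, here is a minimal PySpark sketch of the validation-and-error-handling pattern: split a batch into valid rows and quarantined rejects instead of failing the whole job. The mount paths and rules are hypothetical.

```python
# Sketch: validate a batch and quarantine rejects rather than failing the job.
# Paths and validation rules are hypothetical.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("quality")

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
orders = spark.read.parquet("/mnt/bronze/orders/")

# A row is valid only when every rule holds.
rules = (
    F.col("order_id").isNotNull()
    & (F.col("amount") > 0)
    & F.col("order_date").isNotNull()
)

valid = orders.where(rules)
# exceptAll also catches rows where a rule evaluated to null (e.g. a null
# amount), which a simple negation of the predicate would silently drop.
rejected = orders.exceptAll(valid)

log.info("valid=%d rejected=%d", valid.count(), rejected.count())
valid.write.mode("append").parquet("/mnt/silver/orders/")
rejected.write.mode("append").parquet("/mnt/quarantine/orders/")
```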
Posted 1 week ago