0.0 years
0 Lacs
Hyderabad, Telangana
On-site
Req ID: 329860

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python API Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

NTT DATA Services is seeking a Python API Developer to join our team in Bangalore / Hyderabad. The right candidate will have expert-level experience in Python, the Hadoop Big Data environment, model testing, model deployment, Google Cloud Platform, Vertex AI, and developing repeatable and predictable Python-based frameworks. The role requires the candidate to be the technical lead and agile developer for key model operationalization tasks.

Responsibilities:
1. Design API and batch applications
2. Develop ingestion pipelines from RDBMS to data lakes
3. AI/ML model testing and model deployment
4. Release coordination to enable safe and automated production deployments
5. Create application architecture and design documentation
6. Develop applications using agile methodologies
7. Act as agile SME for the team
8. Perform unit testing
9. Support application deployment to production
10. Interact with various teams to conduct daily work

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
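As an illustrative aside to the listing above: the "ingestion pipelines from RDBMS to data lakes" responsibility can be sketched in a few lines of Python. This is a minimal sketch, assuming a SQLAlchemy-reachable source database and a Parquet-based lake; the connection string, table, and paths are hypothetical placeholders, not details from the posting.

```python
# Minimal RDBMS-to-data-lake ingestion sketch (illustrative only).
# The connection string, table name, and lake path are hypothetical.
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@db-host:5432/sales")

def ingest_table(table: str, lake_path: str) -> None:
    out_dir = Path(lake_path) / table
    out_dir.mkdir(parents=True, exist_ok=True)
    # Read the source table in chunks so large tables never sit fully in
    # memory, writing each chunk as its own Parquet part file.
    for i, chunk in enumerate(pd.read_sql_table(table, engine, chunksize=50_000)):
        chunk.to_parquet(out_dir / f"part-{i:05d}.parquet", index=False)

ingest_table("orders", "/data/lake/raw")
```

A production version would add incremental watermarks and schema checks, but the chunked read-then-write loop is the core of the pattern.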
Posted 1 week ago
0.0 years
0 Lacs
Delhi, Delhi
On-site
Customer Solutions
New Delhi, India
R03217

Description
We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.

About the Role:
The CSTA role is both deeply technical and commercial in nature. You will leverage a technical background (e.g. AppDev, SysAdmin, Distributed Computing) to advise customers on their architectures, including patterns and strategies for operating and maturing their Confluent subscription. You will also utilize relationship management skills and industry experience to guide the customer on how they can best achieve their business goals and value-based outcomes via their Confluent investment.

Confluent is searching for a Customer Success Technical Architect (CSTA) to act as a trusted technical advisor and advocate to work with our customers. The primary objective of this role is to ensure the success, retention and expansion of our customers by providing technical guidance, including best practices, for the Confluent product suite. You will partner with others in Customer Solutions as well as cross-functional divisions such as Sales, Product Management and Engineering to help our customers accelerate time to value, maximize product consumption and achieve their overall business objectives. In this role you will have the opportunity to build broad experience with Kafka, Flink and Confluent IP as well as gain an understanding of complementary and competitive technologies. You will work with a variety of organizations ranging from new start-ups to large enterprise customers. Throughout these interactions, you will build strong relationships, ensure exemplary delivery standards, and have a lot of fun helping our customers build state-of-the-art data streaming platforms!

What You Will Do:
- Champion and advocate for the customer within Confluent. You will be the technical voice of the customer and will leverage learned technical insights and perspective while coordinating between Confluent Sales, Product, Services, Support and Training teams to drive technical success.
- Identify customer technical objections and develop strategies to address those blockers to adoption.
- Proactively support customers through technical lifecycle activities such as architecture planning, cluster and security design, and monitoring and automation; review and provide guidance on upgrade or migration plans, platform and application hardening ideas, and high-availability design.
- Guide customers up the data streaming maturity curve through recommendations on advanced technical topics (e.g. data mesh, stream processing, utilization optimization and performance tuning).
- Develop and present periodic customer reviews, including analysis of technical health and operational performance, to Confluent senior management.
- Document and transfer knowledge to customers and internal teams. This assists customers in advancing their knowledge and abilities on their own, while also helping Technical Support Engineers and Professional Services teams better serve your customers.
- Leverage knowledge of your customer environments and use cases to influence the roadmap of Confluent products.
- When necessary, roll up your sleeves and dig in to help address customer issues alongside Confluent Technical Support Engineers and Core Engineering.

What You Will Bring:
- Bachelor's degree in computer science or engineering, mathematics, or another quantitative field
- Demonstrated success in a technical field role for a product / SaaS company with enterprise customers
- Passion for working on complex technical problems, with a strong understanding of modern infrastructure and streaming technologies; a self-starter who loves a fast-paced environment
- Excellent interpersonal and communication skills and an ability to concisely explain tricky issues and complex solutions to a variety of personas
- Demonstrated ability to manage multiple customers at a time while paying strict attention to detail and delivering results across multiple initiatives such as driving expansion, customer satisfaction, feature adoption, and retention
- Collaborative spirit, outstanding consulting and relationship management skills, and the ability to rapidly switch context
- Hands-on knowledge of one or more key cloud vendors (AWS, GCP and Azure)
- Solid understanding of cloud networking and security technologies (e.g. VPC, Private Link, Private Service Connect, TLS/SSL, SASL, Kerberos, etc.)
- Experience prototyping and analyzing code for client solutions in multiple languages (e.g. Java, Python, Go, etc.)
- Experience with Java Virtual Machine (JVM) tuning and troubleshooting
- Experience operating Linux: you know how to configure, tune, and troubleshoot both RedHat- and Debian-based distributions
- Ability to learn new technologies quickly, as well as a strong interest in doing so!
- Flexibility to travel up to 20% of the time

What Gives You an Edge:
- Confluent Developer or Administrator certifications (https://www.confluent.io/certification/)
- Experience with Apache Flink
- Experience helping customers build distributed systems or streaming solutions that use Apache Kafka alongside technologies such as Spark, Flink, Hadoop, Cassandra, etc.

Ready to build what's next? Let’s get in motion.

Come As You Are
Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible. We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
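For a flavour of the day-to-day technology behind this role, here is a minimal sketch of producing events to a Kafka topic with the confluent-kafka Python client. The broker address, topic, and payload are hypothetical placeholders; this is a common usage pattern, not Confluent reference code.

```python
# Minimal Kafka producer sketch using the confluent-kafka client (illustrative).
# Broker address, topic, and payload are hypothetical placeholders.
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker-1:9092"})

def delivery_report(err, msg):
    # Invoked once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

producer.produce("orders", key="order-42", value='{"amount": 99.5}',
                 callback=delivery_report)
producer.flush()  # Block until all queued messages are delivered.
```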
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 31-Jul-2025

About the role
Enable data-driven decision making across the Tesco business globally by developing analytics solutions using a combination of math, tech and business knowledge.

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.

Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.
You will be responsible for
- Identifying operational improvements and finding solutions by applying CI tools and techniques
- Completing tasks and transactions within agreed KPIs
- Knowing and applying fundamental work theories/concepts/processes in own areas of work
- Engaging with business and functional partners to understand business priorities, ask relevant questions and scope these into an analytical solution document calling out how the application of data science will improve decision making
- In-depth understanding of techniques to prepare the analytical data set leveraging multiple complex data sources
- Building statistical models and ML algorithms with practitioner-level competency
- Writing structured, modularized and codified algorithms using Continuous Improvement principles (development of knowledge assets and reusable modules on GitHub, Wiki, etc.) with expert competency
- Building an easy visualization layer on top of the algorithms in order to empower end-users to take decisions - this could be on a visualization platform (Tableau / Python) or through a recommendation set through PPTs
- Working with the line manager to ensure application / consumption, and also thinking beyond the immediate ask and spotting opportunities to address bigger business questions (if any)

You will need
- 1-2 years of experience in data science application in Retail or CPG
- Preferred functional experience: Marketing, Supply Chain, Customer, Merchandising, Operations, Finance or Digital
- Applied Math: Applied Statistics, Design of Experiments, Linear & Logistic Regression, Decision Trees, Forecasting, Optimization algorithms
- Tech: SQL, Hadoop, Python, Tableau, MS Excel, MS PowerPoint
- Soft Skills: Analytical Thinking & Problem Solving, Storyboarding

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
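As a brief illustrative aside: the "building statistical models and ML algorithms" line above might translate, in its simplest form, to a scikit-learn pipeline like the sketch below. The dataset, column names, and target are hypothetical placeholders invented for illustration.

```python
# Illustrative sketch of a simple classification pipeline in scikit-learn.
# The CSV file, feature columns, and target are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("store_sales.csv")
X = df[["footfall", "promo_depth", "price_index"]]
y = df["high_demand"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```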
Posted 1 week ago
12.0 years
0 Lacs
Noida, Uttar Pradesh
Remote
Location: Noida, Uttar Pradesh, India
Job ID: R0098886
Date Posted: 2025-07-07
Company Name: HITACHI INDIA PVT. LTD
Profession (Job Category): Other
Job Schedule: Full time
Remote: No

Job Description:
Job Title: Solution Architect
Designation: Senior
Company: Hitachi Rail GTS India
Location: Noida, UP, India
Salary: As per Industry

Company Overview: Hitachi Rail is right at the forefront of the global mobility sector following the acquisition. The closing strengthens the company's strategic focus on helping current and potential Hitachi Rail and GTS customers through the sustainable mobility transition – the shift of people from private to sustainable public transport, driven by digitalization.

Position Overview: We are looking for a Solution Architect who will be responsible for translating business requirements into technical solutions, ensuring the architecture is scalable, secure, and aligned with enterprise standards. The Solution Architect will play a crucial role in defining the architecture and technical direction of the existing system. You will be responsible for the design, implementation, and deployment of solutions that integrate with transit infrastructure, ensuring seamless fare collection, real-time transaction processing, and enhanced user experiences. You will collaborate with development teams, stakeholders, and external partners to create scalable, secure, and highly available software solutions.

Job Roles & Responsibilities:
- Architectural Design: Develop architectural documentation such as solution blueprints, high-level designs, and integration diagrams. Lead the design of the system's architecture, ensuring scalability, security, and high availability. Ensure the architecture aligns with the company's strategic goals and future vision for public transit technologies.
- Technology Strategy: Select the appropriate technology stack and tools to meet both functional and non-functional requirements, considering performance, cost, and long-term sustainability.
- System Integration: Work closely with teams to design and implement the integration of the AFC system with various third-party systems (e.g., payment gateways, backend services, cloud infrastructure).
- API Design & Management: Define standards for APIs to ensure easy integration with external systems, such as mobile applications, ticketing systems, and payment providers.
- Security & Compliance: Ensure that the AFC system meets the highest standards of data security, particularly for payment information, and complies with industry regulations (e.g., PCI-DSS, GDPR).
- Stakeholder Collaboration: Act as the technical lead during project planning and discussions, ensuring the design meets customer and business needs.
- Technical Leadership: Mentor and guide development teams through best practices in software development and architectural principles.
- Performance Optimization: Monitor and optimize system performance to ensure the AFC system can handle high volumes of transactions without compromise.
- Documentation & Quality Assurance: Maintain detailed architecture documentation, including design patterns, data flow, and integration points. Ensure the implementation follows best practices and quality standards.
- Research & Innovation: Stay up to date with the latest advancements in technology and propose innovative solutions to enhance the AFC system.
Skills (Mandatory): .NET (C#), C/C++, Java, ASP.NET Core (C#), Angular, OAuth2 / OpenID Connect (Authentication & Authorization), JWT (JSON Web Tokens), Spring Cloud, Docker, Kubernetes, Relational Databases (MSSQL), Data Warehousing, SOAP/RESTful API Design, Redis (Caching & Pub/Sub)

Preferred Skills (Good to have): Python, Android, SSL/TLS Encryption, OWASP Top 10 (Security Best Practices), Vault (Secret Management), Keycloak (Identity & Access Management), Swagger (API Documentation), NoSQL Databases, GraphQL, gRPC, OpenAPI, Istio, Apache Kafka, RabbitMQ, Consul, DevOps & CI/CD Tools

Tools & Technologies: UML (Unified Modeling Language), Lucidchart / Draw.io (Diagramming), PlantUML (Text-based UML generation), C4 Model (Software architecture model), Enterprise Architect (Modeling), Apache Hadoop / Spark (Big Data), Elasticsearch (Search Engine), Apache Kafka (Stream Processing), TensorFlow / PyTorch (Machine Learning/AI)

Education: Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field.

Experience Required:
- 12+ years of experience in solution architecture or software design.
- Proven experience with enterprise architecture frameworks (e.g., TOGAF, Zachman).
- Strong understanding of cloud platforms (AWS, Azure, or Google Cloud).
- Experience in system integration, API design, microservices, and SOA.
- Familiarity with data modeling and database technologies (SQL, NoSQL).
- Strong communication and stakeholder management skills.

Preferred:
- Certification in cloud architecture (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert).
- Experience with DevOps tools and CI/CD pipelines.
- Knowledge of security frameworks and compliance standards (e.g., ISO 27001, GDPR).
- Experience in Agile/Scrum environments.
- Domain knowledge in [insert industry: e.g., finance, transportation, healthcare].

Soft Skills:
- Analytical and strategic thinking.
- Excellent problem-solving abilities.
- Ability to lead and mentor cross-functional teams.
- Strong verbal and written communication.
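Since the listing leans heavily on OAuth2/OpenID Connect and JWT, here is a minimal token-validation sketch. The posting's stack is .NET, but the flow is language-agnostic, so the sketch uses Python's PyJWT; the key, audience, and issuer values are hypothetical placeholders.

```python
# Illustrative JWT validation sketch with PyJWT (the listing's stack is .NET;
# the verification flow shown is the same regardless of language).
# Key, audience, and issuer values are hypothetical placeholders.
import jwt

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def validate_token(token: str) -> dict:
    # Verifies the RS256 signature plus the standard aud/iss/exp claims
    # before any claim in the token is trusted.
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience="afc-api",
        issuer="https://auth.example.com/realms/transit",
    )
```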
Posted 1 week ago
5.0 - 10.0 years
0 - 0 Lacs
Hyderabad
Remote
Data Engineering / Big Data - part-time, work from home (anywhere in the world)

Warm greetings from Excel Online Classes,

We are a team of industry professionals running an institute that provides comprehensive online IT training, technical support, and development services. We are currently seeking Data Engineering / Big Data experts who are passionate about technology and can collaborate with us in their free time. If you're enthusiastic, committed, and ready to share your expertise, we would love to work with you!

We're hiring for the following services:
- Online Training
- Online Development
- Online Technical Support
- Conducting Online Interviews
- Corporate Training
- Proof of Concept (POC) Projects
- Research & Development (R&D)

We are looking for immediate joiners who can contribute in any of the above areas. If you're interested, please fill out the form using the link below:
https://docs.google.com/forms/d/e/1FAIpQLSdvut0tujgMbBIQSc6M7qldtcjv8oL1ob5lBc2AlJNRAgD3Cw/viewform

We also welcome referrals! If you know someone (friends, colleagues, or connections) who might be interested in:
- Teaching, developing, or providing tech support online
- Sharing domain knowledge (e.g., Banking, Insurance, etc.)
- Teaching foreign languages (e.g., Spanish, German, etc.)
- Learning or brushing up on technologies to clear interviews quickly
- Upskilling in new tools or frameworks for career growth
please feel free to forward this opportunity to them.

For any queries, feel free to contact us at: excel.onlineclasses@gmail.com

Thank you & best regards,
Team Excel Online Classes
excel.onlineclasses@gmail.com
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Architect

We are seeking an experienced and forward-thinking Data Architect with deep expertise in Big Data ecosystems to design and lead scalable data architecture solutions. The ideal candidate will be responsible for designing, building, and optimizing data pipelines, storage frameworks, and architecture patterns to support advanced analytics, machine learning, and business intelligence initiatives.

Key Responsibilities
- Lead the design and development of enterprise-grade Big Data architectures using tools such as Hadoop, Spark, Kafka, Hive, etc.
- Architect scalable data lakes, data warehouses, and streaming platforms to support structured and unstructured data ingestion.
- Collaborate with business, data science, and engineering teams to understand requirements and translate them into robust architecture designs.
- Define data modeling standards, metadata management, and governance frameworks.
- Ensure data quality, lineage, and integrity across data systems.
- Drive best practices for performance, reliability, and security of large-scale data processing pipelines.
- Evaluate emerging technologies and make recommendations for adoption in alignment with business goals.
- Provide technical leadership and mentorship to data engineers and developers.
- Establish and enforce architectural standards, design patterns, and documentation.

Required Skills & Experience
- 8+ years of overall experience in data architecture, data engineering, or a similar role.
- Strong expertise in Big Data tools.
- Deep experience with data modeling, ETL frameworks, and distributed data processing.
- Hands-on knowledge of cloud platforms (AWS, GCP, Azure) and tools like Amazon Redshift, BigQuery, Snowflake, or Databricks.
- Proficiency in SQL and scripting languages such as Python, Scala, or Java.
- Experience in designing real-time data streaming and batch processing pipelines.
- Knowledge of data governance, security, and compliance best practices.
- Excellent problem-solving skills and ability to communicate complex technical concepts to non-technical stakeholders.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Certifications in Big Data technologies or cloud platforms (e.g., AWS Certified Data Analytics, GCP Professional Data Engineer).
- Experience working with BI tools (Tableau, Power BI, Looker) and supporting data visualization requirements.
- Exposure to ML pipeline integration is a plus.
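As an illustrative aside on the "real-time data streaming and batch processing pipelines" requirement: a minimal PySpark structured-streaming job reading from Kafka into a Parquet data lake might look like the sketch below. The broker, topic, and paths are hypothetical, and the Kafka connector package is assumed to be on the Spark classpath.

```python
# Illustrative PySpark structured-streaming sketch: Kafka topic -> Parquet lake.
# Assumes the spark-sql-kafka connector is available; broker, topic, and
# paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "clickstream")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/clickstream")
    .option("checkpointLocation", "/data/checkpoints/clickstream")  # recovery bookkeeping
    .start()
)
query.awaitTermination()
```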
Posted 1 week ago
2.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Full-time
Career Site Team: Data Science & Business Intelligence

Job Description
Business Unit: Data Science
Function: Center of Statistical and Data Science Research – Non Cooperator Estimation
Role: Lead Data Scientist / Sr Data Scientist
Location: Vadodara, India

In 100 countries around the world, NielsenIQ provides clients the most complete understanding of how the FMCG market evolves and what consumers buy. As a global leader in measurement and information, we believe providing our clients a precise understanding of the consumer is the key to making the right decisions -- decisions that can lead to profitable growth. At NielsenIQ, we’re always innovating to keep pace with emerging market trends and the increasingly diverse, demanding and connected consumer.

As a Lead Data Scientist in the Center of Statistical and Data Science Research, you will play a critical role in developing and implementing the next generation of solutions we offer our clients, powered by algorithms and AI and leveraging large volumes of internal data, client data and third-party data. You’ll be actively participating with other team members in the overall product creation process, from ideation through proof of concept and experimenting for methodological development, to output visualization and fine-tuning the setup to take into account different market specifics. You will be learning and exploring the newest technologies and applications for data streaming, data architecture, and data modelling solutions. You’ll work alongside a group of talented individuals with different areas of expertise, including best-in-class data scientists, product managers, and data collection specialists, with the main purpose of accelerating innovation.

On a daily basis you’ll be expected to:
- Ideate and develop solutions for the product innovation pipeline
- Build statistical and analytical models (including ML/AI approaches) as prototypes and POCs to address specific client business needs
- Adapt implementation and fine-tune the setup of these models for new markets and new projects
- Support productionizing those models by writing specifications for our Technology teams, supporting the development, and testing the production implementation against your prototypes
- Document the solution and the experimentation that led to it, and create specifications for the Technology team to productionize it
- Refine and enhance product features
- Maintain data integrity and quality throughout the development lifecycle
- Read, write, comment, maintain, and share legible and quality computer code utilized in prototyping and solutioning

Qualifications
Role requirements: E=essential, P=preferred
E - Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering or a related field requiring outstanding analytical expertise and a strong technical background
E - 2+ years of experience in data science/analytics-related product development and/or data solutions development
E - Knowledge of statistical and machine learning methodologies: sampling theory, probability theory, variation analysis, outlier identification techniques, regressions, classification, time series analysis, clustering, neural networks, etc.
E - Ability to lead prototyping as well as supporting pilot programs for R&D purposes.
The primary areas include – but are not limited to – trend analyses, identifying gaps for improvements in coverage, representation/sampling, bias reduction, indirect estimation, data integration, automation, generalization, and harmonization, as well as working with different data sources.
E - Proficiency in Python / R / SQL or other statistical packages, as well as version control
P - Ability to lead and influence other functional areas as a team and deliver results on time and per spec
P - Experience with cloud computing, ETL, and advanced data processing techniques (Spark, Hadoop, etc.)
P - Experience working on Agile teams
P - Experience managing clients and a proven record of solving complex client requirements
P - Strong communication skills and the ability to present and explain methodological and operational solutions to executive leadership

Additional Information
Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
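To ground one of the essential skills in the listing above: "outlier identification techniques" can be as simple as an interquartile-range filter. A minimal pandas sketch, with a hypothetical dataset and column names:

```python
# Illustrative IQR-based outlier flagging in pandas. The data file and
# column names are hypothetical placeholders.
import pandas as pd

def iqr_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    # Flags values lying more than k * IQR outside the quartiles.
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

panel = pd.read_csv("store_panel.csv")
mask = iqr_outliers(panel["weekly_units"])
print(f"{mask.sum()} outlying observations out of {len(panel)}")
```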
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At ABB, we help industries outrun - leaner and cleaner. Here, progress is an expectation - for you, your team, and the world. As a global market leader, we’ll give you what you need to make it happen. It won’t always be easy; growing takes grit. But at ABB, you’ll never run alone. Run what runs the world.

This position reports to: Manager - Digital Labs

Your Role and Responsibilities
In this role, you will have the opportunity to develop an end-to-end business intelligence framework for online products and SaaS solutions based on requirements, including recommending an appropriate architecture (on-premises/cloud), analytics, and reporting. Each day, you will evaluate business data to produce meaningful insights and metrics and to identify trends that could improve business performance. You will also showcase your expertise by designing new dashboards using best practices and intuitive visualization.

You will be mainly accountable for:
- Designing, developing, and managing interactive Power BI dashboards, reports, and data visualizations that meet business needs.
- Utilizing Power BI to transform complex data sets into compelling and easily digestible insights.
- Maintaining and optimizing existing Power BI reports, ensuring high performance and data accuracy.
- Developing Key Performance Indicators (KPIs) and metrics to measure business performance and guide decision-making.
- Leveraging advanced Power BI features, such as R/Python integration, AI visuals, and custom visuals, to enhance reporting capabilities.
- Conducting in-depth data analysis to identify trends, patterns, and opportunities that support business strategies.
- Understanding of data integration (data mining, data preparation, validation and cleaning) and familiarity with complex data and structures.

Good to have skills:
- Writing efficient and complex SQL queries for data extraction and manipulation, and ensuring data accuracy and integrity.
- Knowledge of Hadoop, Spark, or other Big Data tools, which is helpful for scaling large data sets.
- Developing and implementing BI by working in Agile teams: working closely with business stakeholders to understand requirements, gather insights, and translate them into actionable Power BI solutions, providing ongoing support and training as needed.
- Providing business application analysis and data modelling design to collect, standardize, maintain, monitor and audit business data for an effective centralized data warehouse.
- Monitoring and optimizing Power BI performance, including reducing refresh times, improving model efficiency, and addressing any technical issues to maintain a smooth user experience.
- Mentoring junior team members, sharing BI best practices, and staying current with Power BI updates, ensuring the BI team uses the latest techniques and innovations.

Qualifications for the Role
- Degree in Analytics, Engineering, IT, Computer Science or related fields
- 8+ years of Business Intelligence experience, of which 5+ years in developing and implementing enterprise-scale reports and dashboards using Power BI
- Experience developing and maintaining automated data processes and using various business intelligence reporting tools to create dashboards
- Experience working in cross-functional agile scrum teams to deliver visualization dashboards

More about us:
ABB Finance is a trusted partner to the business and a world-class team who delivers forward-looking insights that drive sustainable long-term results and operates with the highest standards.
Apply today for your next career step within ABB and visit www.abb.com to learn about the impact of our solutions across the globe. We value people from different backgrounds. Could this be your story?

Fraud Warning: Any genuine offer from ABB will always be preceded by a formal application and interview process. We never ask for money from job applicants. For current open positions you can visit our career website https://global.abb/group/en/careers and apply. Please refer to the detailed recruitment fraud caution notice using the link https://global.abb/group/en/careers/how-to-apply/fraud-warning.
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Pune
Work from Office
Description: Hiring Data Engineer with AWS or GCP Cloud

Role Summary: The Data Engineer will be responsible for designing, implementing, and maintaining the data infrastructure and pipelines necessary for AI/ML model training and deployment. They will work closely with data scientists and engineers to ensure data is clean, accessible, and efficiently processed.

Required Experience:
- 6-8 years of experience in data engineering, ideally in financial services.
- Strong proficiency in SQL, Python, and big data technologies (e.g., Hadoop, Spark).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and data warehousing solutions.
- Familiarity with ETL processes and tools.
- Knowledge of data governance, security, and compliance best practices.

Key Responsibilities:
- Build and maintain scalable data pipelines for data collection, processing, and analysis.
- Ensure data quality and consistency for training and testing AI models.
- Collaborate with data scientists and AI engineers to provide the required data for model development.
- Optimize data storage and retrieval to support AI-driven applications.
- Implement data governance practices to ensure compliance and security.

What We Offer:
Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment - or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks and a GL Club where you can have coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
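As an illustrative aside on the "ensure data quality and consistency" responsibility above: a lightweight batch validation step might look like the sketch below. The schema, file path, and thresholds are hypothetical placeholders, not project specifics.

```python
# Illustrative pre-training data-quality gate for an incoming batch.
# Expected schema, file path, and thresholds are hypothetical placeholders.
import pandas as pd

EXPECTED_COLUMNS = {"trade_id", "timestamp", "amount", "currency"}

def validate_batch(df: pd.DataFrame) -> list[str]:
    issues: list[str] = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        # Bail out early: the remaining checks assume these columns exist.
        return [f"missing columns: {sorted(missing)}"]
    if df["trade_id"].duplicated().any():
        issues.append("duplicate trade_id values")
    null_rate = df["amount"].isna().mean()
    if null_rate > 0.01:  # tolerate at most 1% missing amounts
        issues.append(f"amount null rate too high: {null_rate:.2%}")
    return issues

batch = pd.read_parquet("/data/incoming/trades.parquet")
print(validate_batch(batch) or "batch OK")
```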
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Full-Time Trainer – Data Science & Data Analytics
Location: Pune - Wakad, Shivajinagar
Experience: 3+ years (in industry and/or training)
Job Type: Full-Time
Salary: Competitive, based on experience

About Us:
Ethans Tech is a leading IT training institute dedicated to empowering learners through industry-aligned education. As an ISO 9001:2015 certified member of NASSCOM and a partner of E&ICT Academy, IIT Guwahati, we train over 5000+ students annually in emerging technologies including Data Science, AI, Cloud, and DevOps.

Job Summary:
We are seeking a passionate and knowledgeable Data Science & Analytics Trainer who can deliver high-quality, hands-on training to students and working professionals. The ideal candidate will possess a strong technical background, real-world project experience, and a genuine enthusiasm for teaching and mentoring.

Key Responsibilities:
Conduct engaging, interactive classroom and/or online training sessions on:
- Python for Data Science
- Statistics & Probability
- Data Wrangling & Visualization (Pandas, Matplotlib, Seaborn, Power BI)
- Machine Learning (Scikit-learn, supervised & unsupervised learning)
- SQL and databases
- Excel for Analytics
- Exploratory Data Analysis (EDA)
- Data storytelling and business insights
Guide learners through hands-on projects, assignments, and case studies. Evaluate and provide feedback on assignments and project submissions. Stay up to date with the latest industry trends and tools. Customize training materials and session plans as per audience background (freshers, working professionals, etc.). Mentor students for career readiness: interviews, portfolio building, and real-world application. Collaborate with curriculum designers and academic teams on content updates.

Required Skills & Qualifications:
- Bachelor's/Master's in Computer Science, Statistics, Engineering, or a related field.
- 3+ years of experience in the Data Science/Data Analytics domain.
- Prior teaching or mentoring experience is a plus.
- Strong command of Python, SQL, Excel, and tools like Power BI or Tableau.
- Familiarity with machine learning algorithms and libraries (Pandas, NumPy, Scikit-learn).
- Excellent communication and presentation skills.
- Ability to explain complex concepts in simple terms.

Good to Have:
- Familiarity with tools like Jupyter, Anaconda, GitHub.
- Certifications in Data Science / Analytics (Microsoft, IBM, Coursera, etc.).
- Exposure to Big Data tools (Hadoop/Spark) or NLP is a plus.

Perks and Benefits:
- Opportunity to work with one of the top-rated IT training institutes in India.
- Exposure to live projects and industry collaborations.
- Certifications and upskilling support.
- Competitive salary with performance incentives.
- Career growth in academic leadership or content development.
Posted 1 week ago
0 years
1 - 9 Lacs
Pune
On-site
About VOIS:
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India:
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Job Description:
Job Requirements:
- Expert experience with Google Cloud Platform and its services such as Data Fusion, Dataflow, Pub/Sub, Kubernetes, Cloud Storage, Composer, Compute Engine, etc.
- Good exposure to Docker containers and Kubernetes.
- Working experience in GCP resource multi-tenancy infrastructure development.
- Good understanding of the Secure by Design concept and GCP IAM.
- Hands-on experience with Google Cloud Build.
- Infrastructure as code (IaC).
- Expert-level experience with the Hadoop ecosystem (Spark, Hive/Impala, HBase, Yarn).

VOIS Equal Opportunity Employer Commitment India:
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills!

Apply now, and we’ll be in touch!
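For a concrete flavour of one service the listing names: publishing a message to a Google Cloud Pub/Sub topic from Python. The project and topic IDs below are hypothetical placeholders; the sketch assumes the google-cloud-pubsub package and ambient GCP credentials.

```python
# Illustrative Google Cloud Pub/Sub publish sketch. Project and topic IDs
# are hypothetical; credentials are taken from the ambient environment.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "ingest-events")

# publish() is asynchronous and returns a future; result() blocks until
# the Pub/Sub server acknowledges the message.
future = publisher.publish(topic_path, b'{"event": "ping"}', source="demo")
print(f"Published message id: {future.result()}")
```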
Posted 1 week ago
10.0 years
0 Lacs
Chennai
On-site
Job Description
The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group targets the biggest enterprises wanting to utilize Cloudera’s services in private and public cloud environments. Our product is built on open source technologies like Hive, Impala, Hadoop, Kudu, Spark and many more, providing unlimited learning opportunities.

A Day in the Life
Over the past 10+ years, Cloudera has experienced tremendous growth, making us the leading contributor to Big Data platforms and ecosystems and a leading provider of enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry, who are tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration. You will manage product development for our CDP components and develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.

Opportunity:
Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the Open Source world. The candidate will be responsible for Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects both upstream and downstream. If you are curious about the project and code quality, you can check the project and the code at the following link; you can even start the development before you join. This is one of the beauties of the OSS world. Apache Hive

Responsibilities:
- Build robust and scalable data infrastructure software.
- Design and create services and system architecture for your projects.
- Improve code quality through writing unit tests, automation, and code reviews.
- Write Java code and/or build several services in the Cloudera Data Warehouse.
- Work with a team of engineers who review each other's code/designs and hold each other to an extremely high bar for the quality of code/designs.
- Understand the basics of Kubernetes.
- Build out the production and test infrastructure.
- Develop automation frameworks to reproduce issues and prevent regressions.
- Work closely with other developers providing services to our system.
- Help to analyze and understand how customers use the product and improve it where necessary.

Qualifications:
- Deep familiarity with the Java programming language.
- Hands-on experience with distributed systems.
- Knowledge of database concepts and RDBMS internals.
- Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus.
- Experience working in a distributed team.
- 3+ years of experience in software development.
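As a small, hands-on taste of the stack (the role itself is Java-centric): querying a Hive warehouse from Python with the PyHive client might look like the sketch below. The HiveServer2 host, credentials, and table names are hypothetical placeholders.

```python
# Illustrative sketch: running a query against HiveServer2 via PyHive.
# Host, port, username, and table names are hypothetical placeholders.
from pyhive import hive

conn = hive.Connection(host="hs2.example.com", port=10000, username="analyst")
cursor = conn.cursor()
cursor.execute(
    "SELECT region, COUNT(*) AS orders FROM sales.orders GROUP BY region"
)
for region, orders in cursor.fetchall():
    print(region, orders)
cursor.close()
conn.close()
```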
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AI Engineer
Experience: Minimum 2 years
CTC: 3 LPA - 6 LPA
Location: Hinjewadi, Pune
Work Mode: Work from Office
Availability: Immediate Joiner

About Us:
Rasta.AI, a product of AI Unika Technologies (P) Ltd, is a pioneering technology company based in Pune. We specialize in road infrastructure monitoring and maintenance using cutting-edge AI, computer vision, and 360-degree imaging. Our platform delivers real-time insights into road conditions to improve safety, efficiency, and sustainability. We collaborate with government agencies, private enterprises, and citizens to enhance road management through innovative tools and solutions.

Role Description:
This is a full-time, on-site role. As an AI Engineer, you will be responsible for developing innovative AI models and software solutions to address real-world challenges. You will collaborate with cross-functional teams to identify business opportunities and provide customized solutions. You will also work alongside talented engineers, designers, and data scientists to implement and maintain these models and solutions.

Technical Skills Required:
- Programming Languages: Python (and other AI-supported languages)
- Generative AI: LLMs (e.g., GPT, LLaMA), LangChain, Hugging Face Transformers, OpenAI API
- Databases: SQL, MongoDB
- Vector Databases: FAISS, Pinecone, Weaviate, ChromaDB
- Python Libraries: NumPy, Pandas, Scikit-learn, Streamlit
- Deep Neural Networks: CNN, RNN, and LLM architectures
- Data Analysis Libraries: TensorFlow, Pandas, NumPy, Scikit-learn, Matplotlib, TensorBoard
- Frameworks: Flask, Django
- Operating Systems: Ubuntu, Windows
- Tools: Jupyter Notebook, PyCharm IDE, Excel, Roboflow
- Big Data (Bonus): Hadoop (Hive, Sqoop, Flume), Kafka, Spark
- Code Repository Tools: Git, GitHub
- DevOps (AWS): Docker, Kubernetes, instance hosting and management

Analytical Skills:
- Exploratory Data Analysis
- Predictive Modeling
- Text Mining
- Natural Language Processing
- Machine Learning
- Image Processing
- Object Detection
- Instance Segmentation
- Deep Learning
- DevOps / AWS knowledge

Expertise:
- Proficiency in the TensorFlow library with RNNs and CNNs
- Familiarity with pre-trained models like VGG-16, ResNet-50, and MobileNet
- Knowledge of Spark Core, Spark SQL, Spark Streaming, Cassandra, and Kafka
- Designing and architecting Hadoop applications
- Experience with chatbot platforms (a bonus)

Responsibilities (the entire lifecycle of model development):
- Data Collection and Preprocessing
- Model Development
- Model Training
- Model Testing
- Model Validation
- Deployment and Maintenance
- Collaboration and Communication

Qualifications:
- Bachelor's or Master's degree in a relevant field (AI, Data Science, Computer Science, etc.)
- Minimum 2 years of experience developing and deploying AI-based software products
- Strong programming skills in Python (and potentially C++ or Java)
- Experience with machine learning libraries (TensorFlow, PyTorch, Keras, scikit-learn)
- Experience with Generative AI (e.g., LLMs, prompt engineering, RAG pipelines using tools like LangChain or Hugging Face)
- Experience with vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search and retrieval-augmented generation
- Experience with computer vision, natural language processing, or recommendation systems
- Experience with cloud computing platforms (Google Cloud, AWS)
- Problem-solving skills
- Excellent communication and presentation skills
- Experience with data infrastructure and tools (SQL, NoSQL, and big data platforms)
- Teamwork skills

Join Us!
If you are passionate about AI and want to contribute to groundbreaking projects in a dynamic startup environment, we encourage you to apply! Be part of our mission to drive technological advancement in India. Drop Your CV - hr@aiunika.com
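As a concrete illustration of the vector-database skills this listing asks for (FAISS, semantic search, RAG retrieval), here is a minimal similarity-search sketch. It is not from the listing itself: the embedding dimension and random vectors are placeholders standing in for real document embeddings.

```python
# Minimal FAISS similarity search (illustrative; vectors are random
# placeholders for real document embeddings).
import faiss
import numpy as np

dim = 128                       # embedding dimension (assumed)
rng = np.random.default_rng(0)

doc_embeddings = rng.random((1000, dim), dtype=np.float32)
index = faiss.IndexFlatL2(dim)  # exact L2 search, no training needed
index.add(doc_embeddings)       # index all "documents"

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)  # top-5 nearest documents
print(ids[0], distances[0])
```

In a RAG pipeline, the returned ids would map back to text chunks that are then placed into the LLM prompt as retrieved context.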
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Role:
We are looking for a highly skilled and experienced Lead Gen AI Engineer to spearhead AI/ML initiatives and oversee the development of advanced machine learning models, deep learning architectures, and generative AI systems. The ideal candidate will have 7-8 years of hands-on experience in data science, machine learning, and data engineering, with a strong focus on leadership, innovation, and generative AI technologies. You will be responsible for guiding a team, delivering AI solutions, and collaborating with cross-functional stakeholders to meet business goals. The desired candidate should be well-versed in AI/ML solutioning and should have worked on end-to-end product deliveries.
Key Responsibilities:
Lead the development and deployment of machine learning models, deep learning frameworks, and AI-driven solutions across the organization.
Work closely with stakeholders to define data-driven strategies and drive innovation using AI and machine learning.
Design and implement robust data pipelines and workflows in collaboration with data engineers and software developers.
Develop and deploy APIs using web frameworks for seamless integration of AI/ML models into production environments (see the sketch after this listing).
Mentor and lead a team of data scientists and engineers, providing technical guidance and fostering professional growth.
Leverage LangChain or LlamaIndex to enhance model integration, document management, and data retrieval capabilities.
Lead projects in generative AI technologies, such as Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and AI agents, to create innovative AI-driven products and services.
Stay updated on the latest AI/ML trends, ensuring that cutting-edge methodologies are adopted across projects.
Collaborate with cross-functional teams to translate business problems into technical solutions and communicate findings effectively to both technical and non-technical stakeholders.
Required Skills and Qualifications:
Experience: 7-8 years of experience in data science and AI/ML, with a strong foundation in machine learning, deep learning, generative AI, and data engineering.
Generative AI Expertise: 2+ years of experience with generative AI; hands-on experience with LLMs, RAG, and AI agents.
AI Agents & Frameworks: Hands-on experience with AI agent frameworks/libraries (e.g., AutoGen, CrewAI, OpenAI's Function Calling, Semantic Kernel).
Programming: Strong proficiency in Python, with experience using TensorFlow, PyTorch, and scikit-learn.
LangChain & LlamaIndex: Experience integrating LLMs with structured and unstructured data.
Knowledge Graphs: Expertise in building and utilizing knowledge graphs for AI-driven applications.
SQL & NoSQL Databases: Hands-on experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, etc.) databases.
API Development: Experience developing APIs using Flask, FastAPI, or Django.
Cloud & MLOps: Experience working with AWS, GCP, Azure, and MLOps best practices.
Excellent communication, leadership, and project management skills.
Strong problem-solving ability with a focus on delivering scalable, impactful solutions.
Preferred Skills:
Experience with computer vision applications.
Chain-of-Thought Reasoning: Familiarity with CoT prompting and reasoning techniques.
Ontology: Understanding of ontologies for knowledge representation in AI systems.
Data Engineering: Experience with ETL pipelines and data engineering workflows.
Familiarity with big data tools like Spark, Hadoop, or distributed computing.
Why Join Us?
Be at the forefront of AI innovation, working on cutting-edge projects. Collaborate with a talented team in a dynamic and innovative environment. Competitive salary, benefits, and leadership opportunities.
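To make the API-development requirement above concrete, here is a minimal hedged sketch of exposing a model behind a FastAPI endpoint, one of the frameworks the listing names. The route shape and the stub model are assumptions for illustration, not details from the listing.

```python
# Minimal model-serving endpoint with FastAPI (illustrative).
# The "model" here is a stub; a real service would load a trained
# artifact (e.g., a scikit-learn or PyTorch model) at startup.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: List[float]

def predict_stub(values: List[float]) -> float:
    # Placeholder for model.predict(...)
    return sum(values) / max(len(values), 1)

@app.post("/predict")
def predict(features: Features) -> dict:
    return {"prediction": predict_stub(features.values)}

# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```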
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
ML Engineer:
Strong experience of at least 2-3 years in Python.
2+ years' experience working on feature/data pipelines and feature stores using PySpark.
Exposure to AWS cloud services such as SageMaker, Bedrock, Kendra, etc.
Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices.
Knowledge of Docker and Kubernetes.
Experience with orchestration/scheduling tools like Argo.
Experience building and consuming data from REST APIs.
Demonstrable ability to think outside of the box and not be dependent on readily available tools.
Excellent communication, presentation, and interpersonal skills are a must.
PySpark AWS Engineer:
Good hands-on experience with Python and Bash scripts.
4+ years of good hands-on exposure to Big Data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive.
Hands-on experience with cloud-platform Big Data technologies (i.e., Glue, EMR, Redshift, S3, Kinesis).
Ability to write Glue jobs and utilise the different core functionalities of Glue.
Good understanding of SQL and data warehouse tools like Redshift.
Experience with orchestration/scheduling tools like Airflow.
Strong analytical, problem-solving, data analysis, and research skills.
Demonstrable ability to think outside of the box and not be dependent on readily available tools.
Excellent communication, presentation, and interpersonal skills are a must.
Roles & Responsibilities:
Collaborate with data engineers and architects to implement and deploy scalable solutions.
Provide technical guidance and code review of the deliverables.
Play an active role in estimation and planning.
Communicate results to diverse technical and non-technical audiences.
Generate actionable insights for business improvements.
Ability to understand business requirements.
Use-case derivation and solution creation from structured/unstructured data.
Actively drive a culture of knowledge-building and sharing within the team.
Encourage continuous innovation and out-of-the-box thinking.
Good To Have:
ML Engineer: Experience researching and applying large language and generative AI models; experience with LangChain, LlamaIndex, and performance evaluation frameworks; experience working with model registries and model deployment and monitoring tools (MLflow / app monitoring tools).
PySpark AWS Engineer: Experience migrating workloads from on-premises to cloud and cloud-to-cloud migrations; experience with data quality frameworks.
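For context on the PySpark skills both roles call for, here is a minimal DataFrame and Spark SQL sketch. The file paths and column names are invented for illustration and are not from the listing.

```python
# Minimal PySpark batch job (illustrative; paths/columns are invented).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-pipeline-demo").getOrCreate()

# Read raw events (e.g., from S3 in the AWS role, or HDFS on-prem).
events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path

# Build simple per-user features: event counts and distinct event types.
features = (
    events.groupBy("user_id")
    .agg(F.count("*").alias("event_count"),
         F.countDistinct("event_type").alias("distinct_event_types"))
)

# The same aggregation via Spark SQL, since the listing names both APIs.
events.createOrReplaceTempView("events")
features_sql = spark.sql(
    "SELECT user_id, COUNT(*) AS event_count FROM events GROUP BY user_id"
)

features.write.mode("overwrite").parquet("s3://example-bucket/features/")
spark.stop()
```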
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for a self-motivated individual with an appetite to learn new skills and be part of a fast-paced team delivering cutting-edge solutions that drive new products and features critical for our customers. Our senior software engineers are responsible for designing, developing, and ensuring the quality, reliability, and availability of key systems that provide critical data and algorithms. Responsibilities of this role include developing new and enhancing existing applications, and you will work collaboratively with technical leads and architects to design, develop, and test these critical applications.
About The Role
Actively participate in the full life cycle of software delivery, including analysis, design, implementation, and testing of new projects and features using Hadoop, Spark/PySpark, Scala or Java, Hive, SQL, and other open-source tools and design patterns. Python knowledge is a bonus for this role.
Working experience with Hudi, Snowflake, or similar technologies.
Must-have technologies: Big Data and AWS services such as EMR, S3, Lambda, Elastic, and Step Functions (a small boto3 sketch follows this listing).
Actively participate in the development and testing of features for assigned projects with little to no guidance.
The position holds opportunities to work under technical experts and also to provide guidance and assistance to less experienced team members or new joiners on the project.
An appetite for learning will be a key attribute for doing well in this role, as the organization is very dynamic and offers tremendous scope across varied technical landscapes.
We consider AI adoption key to excelling in this role; we want dynamic candidates who use AI tools as build partners and share their experiences to energize the organization.
Proactively share knowledge and best practices on using new and emerging technologies across all of the development and testing groups.
Create, review, and maintain technical documentation of software development and testing artifacts.
Work collaboratively with others in a team-based environment.
Identify and participate in the resolution of issues with the appropriate technical and business resources.
Generate innovative approaches and solutions to technology challenges.
Effectively balance and prioritize multiple projects concurrently.
About You
Bachelor's or Master's degree in computer science or a related field
7+ years of experience in the IT industry; product and platform development experience preferred
Strong programming skills in Java or Scala
Must-have technologies include Big Data and AWS, with exposure to services like EMR, S3, Lambda, Elastic, and Step Functions
Knowledge of Python is preferred
Experience with Agile methodology, continuous integration, and/or Test-Driven Development
Self-motivated with a strong desire for continual learning
Takes personal responsibility to impact results and deliver on commitments
Effective verbal and written communication skills
Ability to work independently or as part of an agile development team
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. 
Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
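Since the role above emphasizes AWS services such as Lambda and S3, here is a minimal hedged sketch of a Lambda handler reading an object from S3 with boto3. The bucket and key names are invented, and a production handler would add error handling and structured logging.

```python
# Illustrative AWS Lambda handler: read an object from S3 with boto3.
# Bucket/key defaults are hypothetical; real events would carry them.
import json

import boto3

s3 = boto3.client("s3")  # created once per container, reused across invocations

def handler(event, context):
    bucket = event.get("bucket", "example-bucket")
    key = event.get("key", "data/sample.json")

    obj = s3.get_object(Bucket=bucket, Key=key)
    payload = json.loads(obj["Body"].read())

    # Downstream, a Step Functions workflow or EMR job might consume this.
    return {"record_count": len(payload)}
```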
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role description
Experience: 5-8 years. Location: Hyderabad.
JD: PySpark developer to work on a range of data-driven projects using PySpark, SQL, Python, and Apache Airflow for job scheduling and orchestration on Google Cloud Platform (GCP). In this role you will be responsible for implementing data pipelines, processing large datasets, writing SQL queries, and ensuring smooth orchestration and automation of jobs using Airflow (a minimal DAG sketch follows this listing).
Required Skills & Qualifications:
Experience with PySpark for large-scale data processing
Proficiency in SQL for writing complex queries and optimizing database operations
Strong knowledge of Python and experience using Python libraries like Pandas and NumPy
Hands-on experience with Apache Airflow for job scheduling, DAG creation, and workflow management
Experience working with Google Cloud Platform (GCP), including Google Cloud Storage (GCS), BigQuery, Dataflow, and Dataproc
Strong understanding of ETL processes and data pipeline development
Familiarity with version control systems like Git
Mandatory Skills: GCP Storage, GCP BigQuery, GCP Dataproc, GCP Cloud Composer, GCP DMS, Apache Airflow, Java, Python, Scala, GCP Datastream, Google Analytics Hub, GCP Workflows, GCP Dataform, GCP Datafusion, GCP Pub/Sub, ANSI SQL, GCP Dataflow, Big Data Hadoop Ecosystem
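As an illustration of the Airflow orchestration this role centers on, here is a minimal DAG sketch that runs one daily task. The DAG id, schedule, and task body are assumptions for the example; on GCP this would typically run in Cloud Composer, with the task submitting a Dataproc or Dataflow job instead of the stub shown.

```python
# Minimal Airflow DAG (illustrative): one daily pipeline task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_pipeline():
    # Placeholder for the real work, e.g., launching a PySpark job
    # that reads from GCS and writes curated tables to BigQuery.
    print("processing daily partition")

with DAG(
    dag_id="daily_pyspark_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    process = PythonOperator(
        task_id="process_events",
        python_callable=run_pipeline,
    )
```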
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Required Skills
Proficiency in multiple programming languages, ideally Python
Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batch, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
Strong organizational, problem-solving, and critical thinking skills; strong documentation skills
Preferred Skills
Proficiency in IaC (preferably Terraform; alternatively AWS CloudFormation)
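To ground the serialization and storage-format requirements above, here is a minimal sketch that round-trips a small table through Parquet with pandas and PyArrow. The sample data is invented for illustration.

```python
# Illustrative Parquet round-trip with pandas + PyArrow.
# Columnar formats like Parquet store values by column, which is what
# makes analytical scans and predicate pushdown cheap in a lakehouse.
import pandas as pd

df = pd.DataFrame(
    {"user_id": [1, 2, 3], "amount": [9.5, 3.0, 12.25], "country": ["IN", "IE", "US"]}
)

df.to_parquet("events.parquet", engine="pyarrow", index=False)
back = pd.read_parquet("events.parquet", engine="pyarrow")

assert back.equals(df)
print(back.dtypes)  # the schema travels with the file, unlike CSV
```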
Posted 1 week ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!!
About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make informed purchase decisions, will surely be a fulfilling experience.
Location: Pan India
Key Skills: Hadoop, Spark, Spark SQL, Java
Interested candidates kindly apply at the link below and share your updated CV with Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U
Skills needed:
1. Hands-on experience with Java and Big Data technologies, including Spark, Hive, and Impala
2. Experience with a streaming framework such as Kafka (a minimal sketch follows this listing)
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have experience with any public cloud platform like AWS, Azure, GCP, etc.
5. Ready to upskill as and when needed on project technologies, viz. Ab Initio
Why join us?
Work on industry-leading implementations for Tier-1 clients
Accelerated career growth and global exposure
Collaborative, inclusive work environment rooted in innovation
Exposure to a best-in-class automation framework
Innovation-first culture: we embrace automation, AI insights, and clean data
Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together
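Point 2 above mentions streaming with Kafka; the sketch below shows a minimal produce/consume loop using the third-party kafka-python package. The broker address and topic name are assumptions for the example.

```python
# Minimal Kafka produce/consume (illustrative; uses kafka-python).
# Broker/topic are hypothetical; a real deployment reads them from config.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("product-updates", {"sku": "A-123", "price": 499})
producer.flush()

consumer = KafkaConsumer(
    "product-updates",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating once the topic is drained
)
for message in consumer:
    print(message.value)
```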
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role
Lead solution scoping and development to drive the Enterprise Analytics team's partnership with business teams across Tesco, enabling data-driven decisions and delivering on the organization's key strategic priorities.
You will be responsible for
- Representing Talent Acquisition in all forums/seminars pertaining to process, compliance, and audit
- Performing other miscellaneous duties as required by management
- Driving CI culture, implementing CI projects, and driving innovation within the team
- Engaging with business and functional partners to understand business priorities, ask relevant questions, and scope them into an analytical solution document calling out how the application of data science will improve decision making
- In-depth understanding of techniques to prepare the analytical data set, leveraging multiple complex data sources
- Building statistical models and ML algorithms with practitioner-level competency
- Writing structured, modularized, and codified algorithms using Continuous Improvement principles (development of knowledge assets and reusable modules on GitHub, Wiki, etc.) with expert competency
- Building an easy visualization layer on top of the algorithms to empower end users to take decisions; this could be on a visualization platform (Tableau / Python) or through a recommendation set delivered via PPTs
- Proactively driving consumption of solutions developed by the team and owning the initiative to identify and address areas of improvement in the larger Tesco business
- Keeping up to date with the latest in data science and retail analytics and disseminating the knowledge among colleagues
- Mentoring and leading a small team of Applied Data Scientists to deliver high-impact analytics projects
You will need
- 5+ years of experience applying data science and delivering analytics solutions, preferably in industries such as retail, consumer packaged goods (CPG), telecom, or hospitality
- Exposure to functional areas like marketing, supply chain, customer analytics, merchandising, operations, finance, or digital analytics
- Applied math: applied statistics, design of experiments, regression, decision trees, forecasting, optimization algorithms, clustering, NLP (a small clustering sketch follows this listing)
- Tech: SQL, Hadoop, Spark, Python, Tableau, MS Excel, MS PowerPoint, GitHub
- Business: basic understanding of the retail domain
- Soft skills: analytical thinking and problem solving, storyboarding, stakeholder engagement, leading a team
Whats in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of these pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation
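As a small illustration of the applied-math toolkit this role lists (clustering alongside regression, forecasting, and NLP), here is a hedged K-Means sketch on invented customer features. In practice the inputs would come from the prepared analytical data set described above.

```python
# Illustrative customer segmentation with K-Means (invented data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical features per customer: [monthly_spend, visits_per_month]
X = np.vstack([
    rng.normal([20, 2], [5, 1], size=(100, 2)),     # low-spend shoppers
    rng.normal([120, 12], [20, 3], size=(100, 2)),  # high-frequency shoppers
])

X_scaled = StandardScaler().fit_transform(X)  # scale before distance-based clustering
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

for k in range(2):
    print(f"segment {k}: mean spend {X[labels == k, 0].mean():.1f}")
```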
Posted 1 week ago
0 years
0 Lacs
Bagalur, Karnataka, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.
What You'll Be Doing...
As an Engineer II - Data Engineering in the Artificial Intelligence and Data Organization (AI&D), you will drive various activities, including data engineering, data operations automation, data frameworks, and platforms, to improve the efficiency, customer experience, and profitability of the company. At Verizon, we are on a journey to industrialize our data science and AI capabilities. Very simply, this means that AI will fuel all decisions and business processes across the company. With our leadership in bringing the 5G network nationwide, the opportunity for AI will only grow exponentially, going from enabling billions of predictions to possibly trillions of predictions that are automated and real-time.
Building high-quality data engineering applications.
Designing and implementing data pipelines using Apache Airflow via Composer, Dataflow, and Dataproc for batch and streaming workloads.
Developing and optimizing SQL queries and data models in BigQuery to support downstream analytics and reporting (a minimal client sketch follows this listing).
Automating data ingestion, transformation, and export processes across various GCP components using Cloud Functions and Cloud Run.
Monitoring and troubleshooting data workflows using Cloud Monitoring and Cloud Logging to ensure system reliability and performance.
Collaborating with data analysts, scientists, and business stakeholders to gather requirements and deliver data-driven solutions.
Ensuring adherence to data security, quality, and governance best practices throughout the pipeline lifecycle.
Supporting the deployment of production-ready data solutions and assisting in performance tuning and scalability efforts.
Debugging production failures and identifying solutions.
Working on ETL/ELT development.
What we're looking for...
We are looking for a highly motivated and skilled Engineer II Data Engineer with strong experience in Google Cloud Platform (GCP) to join our growing data engineering team. The ideal candidate will work on building and maintaining scalable data pipelines and cloud-native workflows using a wide range of GCP services such as Airflow (Composer), BigQuery, Dataflow, Dataproc, Cloud Functions, Cloud Run, Cloud Monitoring, and Cloud Logging.
You'll Need To Have
Bachelor's degree or one or more years of work experience.
Two or more years of relevant work experience.
Two or more years of relevant work experience in GCP.
Hands-on experience with Google Cloud Platform (GCP) and services such as:
Airflow (Composer) for workflow orchestration
BigQuery for data warehousing and analytics
Dataflow for scalable data processing
Dataproc for Spark/Hadoop-based jobs
Cloud Functions and Cloud Run for event-driven and container-based computing
Cloud Monitoring and Logging for observability and alerting
Proficiency in Python for scripting and pipeline development.
Good understanding of SQL, data modelling, and data transformation best practices.
Ability to troubleshoot complex data issues and optimize performance.
Ability to communicate effectively through presentation, interpersonal, verbal, and written skills.
Strong communication, collaboration, problem-solving, analytical, and critical-thinking skills.
Even better if you have one or more of the following:
Master's degree in Computer Science, Information Systems, and/or a related technical discipline.
Hands-on experience building, tuning, and deploying AI/ML models and agentic AI for data engineering applications.
Big Data Analytics certification in Google Cloud.
Hands-on experience with Hadoop-based environments (HDFS, Hive, Spark, Dataproc).
Knowledge of cost optimization techniques for cloud workloads.
Knowledge of telecom architecture.
If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.
Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.
Locations: Hyderabad, India; Bangalore, India; Chennai, India
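Given the role's emphasis on BigQuery pipelines, here is a minimal hedged sketch using the google-cloud-bigquery client. The project, dataset, and query are placeholders, and credentials are assumed to come from the environment.

```python
# Illustrative BigQuery query from Python (google-cloud-bigquery).
# Assumes application-default credentials; the table is invented.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

query = """
    SELECT device_type, COUNT(*) AS sessions
    FROM `example-project.analytics.events`   -- hypothetical table
    WHERE event_date = CURRENT_DATE()
    GROUP BY device_type
    ORDER BY sessions DESC
"""

for row in client.query(query).result():
    print(row["device_type"], row["sessions"])
```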
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.
What You'll Be Doing
We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics, telecom network performance, and service assurance analytics. We are working on cutting-edge technologies like digital twin to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer, you will collaborate with business product owners, coaches, industry-renowned data scientists, and system architects to develop strategic data solutions from sources that include batch, file, and data streams. As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources, both structured and unstructured, into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams within Verizon.
Understanding the business requirements and converting them into technical designs.
Working on data ingestion, preparation, and transformation.
Developing data streaming applications.
Debugging production failures and identifying solutions.
Working on ETL/ELT development.
Understanding the DevOps process and contributing to DevOps pipelines.
What we're looking for...
You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems.
You'll Need To Have
Bachelor's degree or four or more years of work experience.
Experience with data warehouse concepts and the data management life cycle.
Experience with the GCP cloud platform (BigQuery, Cloud Composer, Dataproc (or Hadoop + Spark), Cloud Functions).
Experience in any programming language, preferably Python.
Proficiency in graph data modeling, including experience with graph data models and a graph query language.
Exposure to working on GenAI use cases.
Experience in troubleshooting data issues.
Experience in writing complex SQL and performance tuning.
Experience in DevOps.
Experience in GraphDB and Core Java.
Experience in real-time streaming and lambda architecture (a streaming sketch follows this listing).
Even better if you have one or more of the following:
Three or more years of relevant experience.
Any relevant certification as an ETL/ELT developer.
Certification in GCP Data Engineering.
Good problem-solving, analytical, and research capabilities.
Good verbal and written communication.
Experience presenting to and influencing stakeholders.
Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics. Locations - Hyderabad, India
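The listing above calls for real-time streaming and lambda-architecture experience; the sketch below shows a minimal Spark Structured Streaming job reading from Kafka. The broker, topic, and output paths are assumptions, and the job requires the spark-sql-kafka package on the classpath when submitted.

```python
# Illustrative Spark Structured Streaming read from Kafka.
# Broker/topic/checkpoint paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "network-events")
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

# Persist the raw stream; a batch layer over the same data would
# complete the classic lambda-architecture pairing the listing mentions.
query = (
    events.writeStream.format("parquet")
    .option("path", "/tmp/stream-out")
    .option("checkpointLocation", "/tmp/stream-ckpt")
    .start()
)
query.awaitTermination()
```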
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
We are looking for an exceptional data scientist to be a part of the CRM platform data science team. As a Data Scientist in this role, you will be responsible for leveraging data-driven insights and advanced analytics techniques to drive marketing strategies and optimize our product offerings. You will collaborate with cross-functional teams, including applied and data science, marketing, product management, and engineering, to develop data-driven solutions that enhance customer experiences and maximize business outcomes.
What the Candidate Will Do
Collaborate with measurement and optimization teams to design experiments and share readouts.
Develop metrics and dashboards to monitor product performance, customer engagement, and conversion rates, and provide insights for continuous improvement.
Collaborate with product management teams to prioritize product roadmap initiatives based on data-driven insights and market demand.
Partner with internal stakeholders, including Operations, Product, and the Marketing Technology team, to develop marketing strategies and budget decisions based on data insights.
Collaborate with marketing applied scientists on ML-centered initiatives for the comms and growth platform.
Basic Qualifications
M.S. or Bachelor's degree in Statistics, Economics, Machine Learning, Operations Research, or another quantitative field.
Knowledge of the underlying mathematical foundations of statistics, optimization, economics, and analytics.
Knowledge of experimental design and analysis.
Strong experience in data analysis, statistical modeling, and machine learning techniques, with a proven track record of solving complex business problems.
Meticulous attention to detail and rigorous data quality assessment to drive accurate and reliable insights.
Excellent communication skills and stakeholder management.
Advanced SQL expertise with a strong focus on time and space optimization.
Proficiency in Python, and experience working with data manipulation and analysis libraries (e.g., Pandas, NumPy, scikit-learn).
Demonstrated experience working with big data frameworks (e.g., Hadoop, Spark) and prior experience building long-running, stable data pipelines.
Solid understanding of data visualization principles and experience with visualization tools (e.g., Tableau, Looker) to effectively communicate insights.
Excellent judgment, critical thinking, and decision-making skills.
Preferred Qualifications
Prior experience in analysis of product usage patterns, root-cause determination, and customer feedback to identify opportunities for product enhancements and new feature development.
Work experience applying advanced analytics, applied science, or causal inference to marketing problems.
Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations.
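Since this role involves designing experiments and sharing readouts, here is a minimal hedged sketch of analyzing an A/B test with a two-sample t-test. The conversion data is simulated for illustration.

```python
# Illustrative A/B readout: two-sample t-test on simulated conversions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.binomial(1, 0.10, size=5000)    # 10% baseline conversion
treatment = rng.binomial(1, 0.11, size=5000)  # +1pt simulated lift

t_stat, p_value = stats.ttest_ind(treatment, control)
lift = treatment.mean() - control.mean()

print(f"lift={lift:.4f}, t={t_stat:.2f}, p={p_value:.4f}")
# In practice the readout would also cover power, guardrail metrics,
# and multiple-testing considerations before shipping a decision.
```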
Posted 1 week ago