4.0 years
0 Lacs
India
Remote
Job Title
Senior Product Analyst at Careem (Fully remote)

Company Details
Careem is building the Everything App for the greater Middle East, making it easier than ever to move around, order food and groceries, manage payments, and more. Careem is led by a powerful purpose to simplify and improve the lives of people and build an awesome organisation that inspires. Since 2012, Careem has created earnings for over 2.5 million Captains, simplified the lives of over 70 million customers, and built a platform for the region’s best talent to thrive and for entrepreneurs to scale their businesses. Careem operates in over 70 cities across 10 countries, from Morocco to Pakistan.

About the team
The Careem Analytics team’s mission is to build and track the full, lively picture of Careem businesses and experiences, uphold the experience bar, provide actionable insights, and formulate problems and contribute to solving them. As part of this team, you will be a core team member fulfilling this mission. You will work alongside the top analyst talent of the region, leveraging modern analysis and visualization tools to solve the region’s day-to-day problems.

Job Roles & Responsibilities
Act as the first point of contact to answer all business data queries
Develop effective reporting solutions by utilizing engineering best practices and various reporting tools
Participate in continuous improvement of these reporting solutions
Drive and support your designated business unit by converting complex data and findings into understandable tables, graphs, and written reports
Present appropriate analysis and commentary to technical and non-technical audiences
Gain subject matter expertise, help define appropriate key metrics for the business unit, and discover untapped areas for business improvement
Provide concrete data-driven insights
Test and communicate new features to users
Run regular data integrity audits
Devise and evaluate methods for collecting data, such as surveys, questionnaires, and opinion polls
Gather data about consumers, competitors, and market conditions

Requirements
4+ years of demonstrated experience working in an analytical role
3+ years of demonstrated experience with business intelligence and visualization tools creating management dashboards
Strong analytical skills and a passion for working with large sets of data
Passionate about learning new technologies and working on a product of massive scale and impact
Expert at writing SQL queries against large amounts of data
Self-starter with excellent communication and organizational skills
Ability to get hands-on in a complex operational environment
Must be process oriented and a logical thinker with good attention to detail
Working knowledge of reporting tools such as Tableau, MicroStrategy or Looker
Working knowledge of Python, R, Spark, Hive
Experience in ETL / ELT is a plus
Experience working with dimensional data and data lakes is a plus
Experience working with MPP databases like Redshift is a plus

Hiring Process
2-3 rounds of interviews with the hiring team
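The role above centres on writing analytical SQL against large datasets and turning the results into dashboard-ready tables. Purely as an illustrative sketch (not part of the posting), the snippet below shows that kind of aggregation with PySpark SQL; the trips table, its columns, and the S3 path are invented for the example.

```python
from pyspark.sql import SparkSession

# Illustrative only: the source path, table name and columns are hypothetical.
spark = SparkSession.builder.appName("ride-metrics-sketch").getOrCreate()

# In practice the data would come from Hive or a data lake; here we assume Parquet on S3.
trips = spark.read.parquet("s3://example-bucket/trips/")
trips.createOrReplaceTempView("trips")

# A typical dashboard-style query: daily trip volume and completion rate per city.
daily_city_metrics = spark.sql("""
    SELECT city,
           to_date(created_at) AS trip_date,
           COUNT(*) AS total_trips,
           AVG(CASE WHEN trip_status = 'completed' THEN 1 ELSE 0 END) AS completion_rate
    FROM trips
    GROUP BY city, to_date(created_at)
""")

daily_city_metrics.show(10)
```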
Posted 1 week ago
7.0 years
0 Lacs
Greater Chennai Area
On-site
Redefine the future of customer experiences. One conversation at a time. We’re changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed and built on an unwavering belief that connection fuels business and life; connections to our customers with our signature Amazing Service®, our products and services, and most importantly, each other. Since 2008, 100,000+ companies and 1M+ users rely on Nextiva for customer and team communication. If you’re ready to collaborate and create with amazing people, let your personality shine and be on the frontlines of helping businesses deliver amazing experiences, you’re in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing We’re looking for an experienced Engineering Manager to lead backend and data platform teams building the next generation product. You will be responsible for leading the development of Java-based services , ETL pipelines , and data infrastructure that power mission-critical features like scheduling, labor forecasting, time tracking, and analytics. You’ll collaborate closely with product, data science, and infrastructure teams to ensure our systems are scalable, reliable, and data-driven — enabling our customers to optimize workforce operations in real time. Key Responsibilities Lead a team of backend and data engineers responsible for: Building and maintaining Java microservices (Spring Boot) for WFM features. Designing and scaling ETL pipelines, data ingestion, and data lake components. Supporting reporting, analytics, and forecasting models with high-quality datasets. Define and evolve the architecture for data processing, streaming, and batch workloads using tools like Apache Kafka, Airflow, AWS Glue, or Spark. Collaborate with Product Managers and Data Analysts to turn business requirements into scalable data solutions. Drive engineering best practices in CI/CD, code quality, observability, and data governance. Mentor engineers, foster a strong team culture, and support career growth through coaching and feedback. Work cross-functionally with QA, DevOps, and InfoSec to ensure compliance, scalability, and performance. Required Qualifications 7+ years of backend software engineering experience, with at least 3+ years in engineering leadership roles. Strong hands-on experience with Java (Spring Boot) and microservice architecture. Proven experience managing ETL workflows, data pipelines, and distributed data processing. Knowledge of relational and analytical databases (e.g., PostgreSQL, Redshift, Snowflake). Experience with event streaming platforms (Kafka, Kinesis, or similar). Cloud-native development experience with AWS, GCP, or Azure. Familiarity with data warehousing, schema evolution, and data quality best practices. Solid understanding of Agile development methodologies and team management. Preferred Qualifications Experience with observability tools like Prometheus, Grafana, or Datadog. Exposure to ML/forecasting models for labor planning is a plus. Nextiva DNA (Core Competencies) Nextiva’s most successful team members share common traits and behaviors: Drives Results: Action-oriented with a passion for solving problems. They bring clarity and simplicity to ambiguous situations, challenge the status quo, and ask what can be done differently. They lead and drive change, celebrating success to build more success. 
Critical Thinker: Understands the "why" and identifies key drivers, learning from the past. They are fact-based and data-driven, forward-thinking, and see problems a few steps ahead. They provide options, recommendations, and actions, understanding risks and dependencies. Right Attitude: They are team-oriented, collaborative, competitive, and hate losing. They are resilient, able to bounce back from setbacks, zoom in and out, and get in the trenches to help solve important problems. They cultivate a culture of service, learning, support, and respect, caring for customers and teams. Total Rewards Our Total Rewards offerings are designed to allow our employees to take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation. We aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses. Medical 🩺 - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. This comprehensive coverage ensures that essential healthcare needs are met for the entire family unit, providing peace of mind and security in times of medical necessity. Group Term & Group Personal Accident Insurance 💼 - Provides insurance coverage against the risk of death / injury during the policy period sustained due to an accident caused by violent, visible & external means. Coverage Type - Employee Only Sum Insured - 3 times of annual CTC with minimum cap of INR 10,00,000 Free Cover Limit - 1.5 Crore Work-Life Balance ⚖️ - 15 days of Privilege leaves per calendar year, 6 days of Paid Sick leave per calendar year, 6 days of Casual leave per calendar year. Paid 26 weeks of Maternity leaves, 1 week of Paternity leave, a day off on your Birthday, and paid holidays Financial Security💰 - Provident Fund & Gratuity Wellness 🤸 - Employee Assistance Program and comprehensive wellness initiatives Growth 🌱 - Access to ongoing learning and development opportunities and career advancement At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Join us and build a rewarding career! Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B.To check out what’s going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Data Engineer Description We are seeking a skilled Data Engineer with 7-10 years of experience to join our dynamic team in India. The ideal candidate will have a strong background in designing and optimizing data pipelines, as well as a passion for working with large datasets to drive business insights. Location : Trivandrum, Kochi, Bangalore Responsibilities Design, build, and maintain scalable data pipelines and architecture. Develop and optimize ETL processes for data ingestion and transformation. Collaborate with data scientists and analysts to understand data requirements and deliver solutions. Implement data quality checks and monitor data integrity. Utilize cloud-based data technologies and services for data storage and processing. Ensure compliance with data governance and security policies. Skills and Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in SQL and experience with database technologies such as MySQL, PostgreSQL, or Oracle. Strong knowledge of programming languages such as Python, Java, or Scala. Experience with big data technologies like Hadoop, Spark, or Kafka. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Understanding of data warehousing concepts and tools (e.g., Redshift, Snowflake). Experience with data modeling and data architecture design.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
India
Remote
Position - AWS Data Engineer Experience Range: 7 to 11 Years Location: Remote Shift Timings: 12 PM to 9 PM Primary Skills: Python, PySpark, SQL, AWS Responsibilities · Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality. · AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge scheduler, etc. · Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options. · Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services. · Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources. · Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency. · Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud. · Team Leadership: Mentor and guide data engineers, ensuring they adhere to best practices and meet project deadlines. Qualifications · Bachelor’s degree in Computer Science, Engineering, or a related field. · 6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms. · Strong understanding of data warehousing and data lake concepts. · Proficiency in SQL and at least one programming language (Python/PySpark). · Good to have - Experience with any big data technologies like Hadoop, Spark, and Kafka. · Knowledge of data modeling and data quality best practices. · Excellent problem-solving, analytical, and communication skills. · Ability to work independently and as part of a team. Preferred Qualifications · Certifications in AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Data.
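The pipeline responsibilities above (extract from S3, transform, load into the lake or warehouse) can be pictured with a short PySpark sketch. This is illustrative only and not taken from the posting; the bucket names, columns, and file formats are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL sketch: bucket names and columns below are hypothetical.
spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Extract: read raw CSV landed in S3.
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: basic cleansing and typing, plus a derived partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write curated, partitioned Parquet back to the data lake
# (queryable via Athena or Redshift Spectrum, for example).
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```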
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an Informatica IDMC Developer at Coforge, your primary responsibility will be to design, develop, and maintain resilient ETL pipelines using Informatica Intelligent Data Management Cloud (IDMC/IICS). You will work closely with data architects, analysts, and business stakeholders to comprehend data requirements and integrate data from various sources, including databases, APIs, and flat files. Your role will involve optimizing data workflows for enhanced performance, scalability, and reliability while monitoring and troubleshooting ETL jobs to address data quality issues. In addition, you will be expected to implement data governance and security best practices, ensuring compliance and confidentiality. Maintaining detailed documentation of data flows, transformations, and architecture will be essential. Active participation in code reviews and contributing to continuous improvement initiatives are also part of your responsibilities. To excel in this role, you must possess substantial hands-on experience with Informatica IDMC (IICS) and cloud-based ETL tools. Proficiency in SQL and prior experience with relational databases like Oracle, SQL Server, and PostgreSQL is necessary. Familiarity with cloud platforms such as AWS, Azure, or GCP and data warehousing concepts and tools like Snowflake, Redshift, or BigQuery is highly desirable. Strong problem-solving skills and effective communication abilities are key attributes that will contribute to your success in this position. Preferred qualifications for this role include experience with CI/CD pipelines and version control systems, knowledge of data modeling and metadata management, and certifications in Informatica or cloud platforms, which would be considered advantageous. If you have 5 to 8 years of relevant experience and possess the required skills and qualifications, we encourage you to apply for this Informatica IDMC Developer position based in Greater Noida. Kindly send your CV to Gaurav.2.Kumar@coforge.com.,
Posted 1 week ago
5.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company Description Sufalam Technologies, based in Ahmedabad, India, is an IT services and solutions company known for integrating and optimizing complex technologies and data to enhance business results. We bring together real-world business experience with deep technology expertise, service delivery tools, and proven methodologies to help clients achieve their strategic objectives. Our services include Custom Application Development, Web Application and Development, BPO, and much more. Our expertise in different vertical industry domains and a wide range of software tools has ensured a consistent track record of delivering top-notch IT Services globally. Role Description This is a full-time on-site role for an AWS Data Engineer located in Ahmedabad. The AWS Data Engineer will be responsible for designing and implementing data engineering solutions, developing data models, managing Extract, Transform, Load (ETL) processes, and ensuring the efficient operation of data warehousing solutions. Additionally, the engineer will contribute to data analytics activities to support business decision-making and strategic goals. Key Responsibilities ● Design and implement scalable and secure ETL/ELT pipelines on AWS for processing financial data. ● Build automated data reconciliation systems to ensure data integrity and accuracy across multiple financial sources (e.g., bank statements, internal ledgers, ERP, payment gateways). ● Collaborate closely with Finance, Data Science, and Product teams to understand reconciliation needs and ensure timely data delivery. ● Implement monitoring and alerting for pipeline health and data quality. ● Maintain detailed documentation on data flows, data models, and reconciliation logic. ● Ensure compliance with financial data handling and audit standards. Must-Have Skills ● 5-6 years of experience in data engineering, with a strong focus on AWS data services. ● Hands-on experience with: ○ AWS Glue, Lambda, S3, Redshift, Athena, Step Functions ○ AWS Lake Formation and IAM for secure data governance ● Solid understanding of data reconciliation processes in the finance domain (e.g., matching transactions, resolving mismatches, variance analysis). ● Strong SQL skills and experience with data warehousing and data lakes. ● Experience with Python or PySpark for data transformation. ● Knowledge of financial accounting principles or experience working with financial datasets (AR, AP, General Ledger, etc.).
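Since the role above centres on automated reconciliation of financial sources (matching transactions, resolving mismatches, variance analysis), here is a deliberately simplified sketch of that idea in Python with pandas. The file names and columns (txn_ref, amount) are invented for illustration and are not from the posting.

```python
import pandas as pd

# Illustrative reconciliation sketch; paths and column names are assumptions.
bank = pd.read_csv("bank_statement.csv")      # columns: txn_ref, amount, txn_date
ledger = pd.read_csv("internal_ledger.csv")   # columns: txn_ref, amount, posted_date

# Match transactions on their reference and compare amounts.
merged = bank.merge(ledger, on="txn_ref", how="outer",
                    suffixes=("_bank", "_ledger"), indicator=True)

# Unmatched items on either side.
only_in_bank = merged[merged["_merge"] == "left_only"]
only_in_ledger = merged[merged["_merge"] == "right_only"]

# Matched items whose amounts disagree (variance analysis).
both = merged[merged["_merge"] == "both"].copy()
both["variance"] = both["amount_bank"] - both["amount_ledger"]
mismatches = both[both["variance"].abs() > 0.01]

print(f"bank-only: {len(only_in_bank)}, ledger-only: {len(only_in_ledger)}, "
      f"amount mismatches: {len(mismatches)}")
```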
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Voyager (94001), India, Bangalore, Karnataka Manager, Data Engineering Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you’ll have the opportunity to be on the forefront of driving a major transformation within Capital One. What You’ll Do: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance Basic Qualifications: Bachelor’s Degree At least 4 years of experience in application development (Internship experience does not apply) At least 2 years of experience in big data technologies At least 1 year experience with cloud computing (AWS, Microsoft Azure, Google Cloud) At least 2 years of people management experience Preferred Qualifications: 7+ years of experience in application development including Python, SQL, Scala, or Java 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 4+ years experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 4+ year experience working on real-time data and streaming applications 4+ years of experience with NoSQL implementation (Mongo, Cassandra) 4+ years of data warehousing experience (Redshift or Snowflake) 4+ years of experience with UNIX/Linux including basic commands and shell scripting 2+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization for this position No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. 
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Delhi
On-site
Wingify is looking for a Senior Data Architect to join its team in Delhi. As a Senior Data Architect, you will be responsible for leading and mentoring a team of data engineers, optimizing scalable data infrastructure, driving data governance frameworks, collaborating with cross-functional teams, and ensuring data security, compliance, and quality. Your role will involve optimizing data processing workflows, fostering a culture of innovation and technical excellence, and aligning technical strategy with business objectives. To be successful in this role, you should have at least 10 years of experience in software/data engineering, with a minimum of 3 years in a leadership position. You should possess expertise in backend development using programming languages like Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS. Proficiency in SQL, Python, and Scala for data processing and analytics is essential, along with a strong understanding of cloud platforms such as AWS, GCP, or Azure and their data services. Additionally, you should have experience with big data technologies like Spark, Hadoop, Kafka, and distributed computing frameworks, as well as hands-on experience with data warehousing solutions like Snowflake, Redshift, or BigQuery. Deep knowledge of data governance, security, and compliance, along with familiarity with NoSQL databases and automation/DevOps tools, is required. Strong leadership, communication, and stakeholder management skills are crucial for this role. Preferred qualifications include experience in machine learning infrastructure or MLOps, exposure to real-time data processing and analytics, and interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture. Prior experience in a SaaS or high-growth tech company would be advantageous. Please note that candidates must have a minimum of 10 years of experience to be eligible for this role. Graduation from Tier-1 colleges, such as IIT, is preferred. Candidates from B2B product companies with high data traffic are encouraged to apply; those who do not meet these criteria are kindly requested not to apply.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
This role requires 2-7 years of experience and is based in Noida, Gurugram, Indore, Pune, or Bangalore; candidates should either be currently serving their notice period or be available to join immediately. You should have 2-6 years of hands-on experience with Big Data technologies like PySpark (DataFrame and SparkSQL), Hadoop, and Hive. Additionally, you should have good experience with Python and Bash scripts, a solid understanding of SQL and data warehouse concepts, and strong analytical, problem-solving, data analysis, and research skills. You should also demonstrate the ability to think creatively and independently, along with excellent communication, presentation, and interpersonal skills. It would be beneficial if you have hands-on experience with cloud-platform-provided Big Data technologies such as IAM, Glue, EMR, Redshift, S3, and Kinesis. Experience in orchestration with Airflow or any other job scheduler, as well as experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations, would be considered a plus.
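The posting mentions orchestration with Airflow or another job scheduler as a plus. Purely as an illustration (not part of the posting), the sketch below wires two PySpark jobs into a daily Airflow 2.x DAG; the DAG id, schedule, and script paths are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal orchestration sketch; DAG id, schedule and spark-submit paths are invented.
with DAG(
    dag_id="daily_pyspark_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="spark-submit /opt/jobs/ingest_raw.py",
    )
    transform = BashOperator(
        task_id="transform_to_hive",
        bash_command="spark-submit /opt/jobs/transform_to_hive.py",
    )

    ingest >> transform  # run the transform only after ingestion succeeds
```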
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing experience. Data Modeler Job Description: Looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long term data models, and ensuring the efficiency and consistency of our data systems. Key areas of expertise include Analyze and translate business needs into long term solution data models. Evaluate existing data systems and recommend improvements. Define rules to translate and transform data across data models. Work with the development team to create conceptual data models and data flows. Develop best practices for data coding to ensure consistency within the system. Review modifications of existing systems for cross compatibility. Implement data strategies and develop physical data models. Update and optimize local and metadata models. Utilize canonical data modeling techniques to enhance data system efficiency. Evaluate implemented data systems for variances, discrepancies, and efficiency. Troubleshoot and optimize data systems to ensure optimal performance. Strong expertise in relational and dimensional modeling (OLTP, OLAP). Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner). Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL). Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures. Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI). Familiarity with ETL processes, data integration, and data governance frameworks. Strong analytical, problem-solving, and communication skills. Qualifications: Bachelor's degree in Engineering or a related field. 3 to 5 years of experience in data modeling or a related field. 4+ years of hands-on experience with dimensional and relational data modeling. Expert knowledge of metadata management and related tools. Proficiency with data modeling tools such as Erwin, Power Designer, or Lucid. Knowledge of transactional databases and data warehouses. Preferred Skills: Experience in cloud-based data solutions (AWS, Azure, GCP). Knowledge of big data technologies (Hadoop, Spark, Kafka). Understanding of graph databases and real-time data processing. Certifications in data management, modeling, or cloud data engineering. Excellent communication and presentation skills. Strong interpersonal skills to collaborate effectively with various teams.
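The dimensional-modeling expertise the posting asks for can be pictured with a toy star schema: one fact table joined to two dimensions. The sketch below is not from the posting; it uses Python's built-in sqlite3 purely to keep the DDL runnable, and the table and column names are invented. Real warehouse DDL (Snowflake, Redshift, BigQuery) would add types plus distribution or clustering options.

```python
import sqlite3

# Tiny star-schema illustration: two dimension tables and one fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_name TEXT,
    segment TEXT
);

CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date TEXT,
    month INTEGER,
    year INTEGER
);

CREATE TABLE fact_sales (
    sales_key INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    quantity INTEGER,
    net_amount REAL
);
""")

# A star-join query typical of OLAP reporting on this model.
conn.execute("""
    SELECT d.year, c.segment, SUM(f.net_amount)
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY d.year, c.segment
""")
print("star schema created")
```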
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Maharashtra
On-site
As the Technical Lead of Data Engineering at Assent, you will collaborate with various stakeholders including Product Managers, Product Designers, and Engineering team members to identify opportunities and evaluate the feasibility of solutions. Your role will involve offering technical guidance, influencing decision-making, and aligning data engineering initiatives with business objectives as part of Assent's roadmap development. You will be responsible for driving the technical strategy, overseeing team execution, and implementing process improvements to construct resilient and scalable data systems. In addition, you will lead data engineering efforts, mentor a growing team, and establish robust and scalable data infrastructure. Key Requirements & Responsibilities: - Lead the technical execution of data engineering projects to ensure high-quality and timely delivery, covering discovery, delivery, and adoption stages. - Collaborate with Architecture team members to design and implement scalable, high-performance data pipelines and infrastructure. - Provide technical guidance to the team, ensuring adherence to best practices in data engineering, performance optimization, and system reliability. - Work cross-functionally with various teams such as Product Managers, Software Development, Analysts, and AI/ML teams to define and implement data initiatives. - Partner with the team manager to plan and prioritize work, striking a balance between short-term deliverables and long-term technical enhancements. - Keep abreast of emerging technologies and methodologies, advocating for their adoption to accelerate the team's objectives. - Ensure compliance with corporate security policies and follow the established guidelines and procedures of Assent. Qualifications: Your Knowledge, Skills and Abilities: - Possess 10+ years of experience in data engineering, software development, or related fields. - Proficient in cloud data platforms, particularly AWS. - Expertise in modern data technologies like Spark, Airflow, dbt, Snowflake, Redshift, or similar. - Deep understanding of distributed systems and data pipeline design, with specialization in ETL/ELT processes, data warehousing, and real-time streaming. - Strong programming skills in Python, SQL, Scala, or similar languages. - Experience with infrastructure as code tools like Terraform, CloudFormation, and knowledge of DevOps best practices. - Ability to influence technical direction and promote best practices across teams. - Excellent communication and leadership skills, with a focus on fostering collaboration and technical excellence. - A learning mindset, continuously exploring new technologies and best practices. - Experience in security, compliance, and governance related to data systems is a plus. This is not an exhaustive list of duties, and responsibilities may be modified or added as needed to meet business requirements. Life at Assent: At Assent, we are dedicated to cultivating an inclusive environment where team members feel valued, respected, and heard. Our diversity, equity, and inclusion practices are guided by our Diversity and Inclusion Working Group and Employee Resource Groups (ERGs), ensuring that team members from diverse backgrounds are recruited, retained, and provided opportunities to contribute to business success. If you need assistance or accommodation during any stage of the interview and selection process, please reach out to talent@assent.com, and we will be happy to assist you.,
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job:
Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform (GCP) services and products like BigQuery, Cloud
Proficiency in large scale data platforms and data processing systems such as Google BigQuery, Amazon Redshift, Azure DataLake
Excellent Python, PySpark and SQL development and debugging skills; exposure to other BigData frameworks like Hadoop Hive would be an added advantage
Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g. RabbitMQ and Pub/Sub)
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Work with data and analytics experts to strive for greater functionality in our data systems.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Keep our data separated and secure across national boundaries through multiple data centres and Azure/AWS

Skills:
Advanced working SQL knowledge and experience working with relational databases, query authoring, optimizing (SQL) as well as working familiarity with a variety of databases.
Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured data.

Key Skills: Cloud Bigtable, AI/ML solutions, Compute Engine, Cloud

Profile:
ClicFlyer provides SAAS Business Intelligence and Analytics Solutions to Retail Chains and Brands across the Middle East and to leading Global FMCG Companies. From data integration and standardization to tailor-fit action plans, we take care of all the large and complex data to provide the retailer with customer-centric Business Intelligence and Analytical insights. Our services are available in UAE, KSA, Bahrain, Qatar, Kuwait, Oman, Jordan, Egypt, S. Africa & Indonesia. (ref:hirist.tech)
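The posting above combines event-driven messaging (RabbitMQ, Pub/Sub) with warehouse loading (BigQuery). As a hedged illustration only, the sketch below pulls JSON messages from a Pub/Sub subscription and streams them into a BigQuery table using the google-cloud-pubsub and google-cloud-bigquery client libraries; the project, subscription, and table IDs are placeholders, not real resources.

```python
import json
from concurrent.futures import TimeoutError
from google.cloud import bigquery, pubsub_v1

# Illustrative ingest step (Pub/Sub -> BigQuery streaming insert).
PROJECT_ID = "example-project"
SUBSCRIPTION_ID = "offer-events-sub"
TABLE_ID = "example-project.analytics.offer_events"

bq = bigquery.Client(project=PROJECT_ID)
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def handle_message(message):
    row = json.loads(message.data.decode("utf-8"))
    errors = bq.insert_rows_json(TABLE_ID, [row])  # streaming insert of one row
    if not errors:
        message.ack()
    else:
        message.nack()  # let Pub/Sub redeliver if the insert failed

# Listen briefly in this sketch; a real service would run indefinitely.
future = subscriber.subscribe(subscription_path, callback=handle_message)
try:
    future.result(timeout=60)
except TimeoutError:
    future.cancel()
```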
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Software Engineer, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. To be successful as a Software Engineer you should have experience with: Demonstrable expertise with front-end and back-end skillsets. Java Proficiency (Java 17+) and Spring Ecosystem (Spring MVC, Data JPA, Security, etc.) with strong SQL and NoSQL integration expertise. React.js and JavaScript expertise: Material UI, Ant Design, and state management expertise (Redux, Zustand or Context API). Strong knowledge of runtime (virtualisation, containers and Kubernetes) and expertise with test-driven development using frameworks like Cypress, Playwright, Selenium, etc. Strong knowledge of CI/CD pipelines and tooling: GitHub Actions, Jenkins, GitLab CI or similar. Monitoring and Observability - logging/tracing and alerting with knowledge of SRE integrations into open-source tooling like Grafana/ELK etc. Some Other Highly Valued Skills May Include Expertise building ELT pipelines and cloud/storage integrations - data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.). Expertise with security (OAuth2, CSRF/XSS protection), secure coding practice and Performance Optimization - JVM tuning, performance profiling, caching, lazy loading, rate limiting and high availability in large datasets. Expertise in Public, Private and Hybrid Cloud technologies (DC, AWS, Azure, GCP etc) and across broad Network domains (physical and wireless) – VXLAN/EVPN/WAN/SD-WAN/LAN/WLAN etc. You may be assessed on the key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune. Purpose of the role To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities Development and delivery of high-quality software solutions by using industry aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. Collaboration with peers, participate in code reviews, and promote a culture of code quality and knowledge sharing. Stay informed of industry technology trends and innovations and actively contribute to the organization’s technology communities to foster a culture of technical excellence and growth. Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions.
Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes. Consult on complex issues; providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and developing new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business aligned support areas to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practises (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 week ago
2.0 years
0 Lacs
Delhi, India
On-site
JOB_POSTING-3-72598-2 Job Description Role Title: Software Engineer (L09) Company Overview Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles. Organizational Overview This role will be part of the Data Architecture & Analytics group part of CTO organization. The API Service team specializes in provisioning REST APIs for real time interactions with Enterprise Data Lake supporting business decision-making by designing and delivering resilient, scalable, secure and low latency solutions using cutting edge tools and cloud technologies. The team leverages Java Spring Boot Microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real time solutions in the Cloud Foundry. Role Summary/Purpose We are looking for an API Developer to design and develop consumer-centric low latency scalable cloud native applications leveraging Spring & Cloud technologies for our Enterprise Data Lake initiative. This high visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems. Key Responsibilities Design and develop containerized microservices for cloud native applications using Spring framework Develop low latency Java Spring Boot APIs and deploy on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines. Develop integration of Kafka, HBase, Redshift, MySQL and Hive databases Research caching techniques and develop solutions for data caches such as GemFire, Redis Develop and deploy code on on-prem and on AWS Required Skills/Knowledge Experience with deployment of microservice architecture on Pivotal Cloud Foundry platform Experience with public cloud computing platforms such as AWS Experience integrating with load balancers and Protegrity platform for tokenization Experience with Agile project management methods and practices. Demonstrated excellent planning and organizational skills Collaborate across teams of internal and external technical staff, business analysts, software support and operations staff Able to work effectively with multiple teams and stakeholders Desired Skills/Knowledge Good to have experience working in Financial Industry or Credit processing experience. 
Willing to stay abreast of the latest developments in technology Experience with working on a geographically distributed team managing onshore/offshore resources with shifting priorities. Eligibility Criteria Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with minimum 2 years of professional Java development experience; or lieu of a degree with 4 years of professional development (Java/J2EE) experience. Minimum 2 years of in-depth experience in design, and development using J2EE/Eclipse, Spring Boot, REST Services in complex large scale environments Minimum 1 year of experience using in memory data grid technology such as GemFire, Redis, Hazelcast Minimum 1 year of experience integration with middleware platforms such as IBM WebSphere, Tibco, Oracle ESB Minimum 1 year of experience integration with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB Work Timings: 3 PM to 12 AM IST This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details. For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, LPP) L4 to L7 Employees who have completed 12 months in the organization and 12 months in current role and level are only eligible. L8+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L04+ Employees can apply Grade/Level: 09 Job Family Group Information Technology
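The posting above is a Java/Spring Boot role, but the caching responsibility it describes (backing low-latency APIs with data caches such as GemFire or Redis) follows a common look-aside pattern. Purely as an illustration in Python, not the posting's stack, the sketch below shows that pattern with the redis client; the key prefix, TTL, and loader function are invented for the example.

```python
import json
import redis

# Look-aside cache sketch: check the cache first, fall back to the slow source,
# then populate the cache with a TTL. Connection details are placeholders.
cache = redis.Redis(host="localhost", port=6379, db=0)
TTL_SECONDS = 300

def load_account_from_source(account_id: str) -> dict:
    # Placeholder for the expensive lookup (e.g. a data lake or warehouse query).
    return {"account_id": account_id, "status": "active"}

def get_account(account_id: str) -> dict:
    key = f"account:{account_id}"
    cached = cache.get(key)
    if cached is not None:          # cache hit: skip the slow lookup
        return json.loads(cached)
    account = load_account_from_source(account_id)
    cache.setex(key, TTL_SECONDS, json.dumps(account))
    return account

print(get_account("12345"))
```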
Posted 1 week ago
2.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-72598-1 Job Description Role Title: Software Engineer (L09) Company Overview Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles. Organizational Overview This role will be part of the Data Architecture & Analytics group part of CTO organization. The API Service team specializes in provisioning REST APIs for real time interactions with Enterprise Data Lake supporting business decision-making by designing and delivering resilient, scalable, secure and low latency solutions using cutting edge tools and cloud technologies. The team leverages Java Spring Boot Microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real time solutions in the Cloud Foundry. Role Summary/Purpose We are looking for an API Developer to design and develop consumer-centric low latency scalable cloud native applications leveraging Spring & Cloud technologies for our Enterprise Data Lake initiative. This high visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems. Key Responsibilities Design and develop containerized microservices for cloud native applications using Spring framework Develop low latency Java Springboot APIs and deploy on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines. Develop integration of Kakfa, Hbase, Redshift, MySQL and Hive databases Research caching techniques and develop solutions for data caches such as Gemfire, Redis Develop and deploy code on on-prem and on AWS Required Skills/Knowledge Experience with deployment of microservice architecture on Pivotal Cloud Foundry platform Experience with public cloud computing platforms such as AWS Experience integrating with load balancers and Protegrity platform for tokenization Experience with Agile project management methods and practices. Demonstrated excellent planning and organizational skills Collaborate across teams of internal and external technical staff, business analysts, software support and operations staff Able to work effectively with multiple teams and stakeholders Desired Skills/Knowledge Good to have experience working in Financial Industry or Credit processing experience. 
Willing to stay abreast of the latest developments in technology Experience with working on a geographically distributed team managing onshore/offshore resources with shifting priorities. Eligibility Criteria Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with minimum 2 years of professional Java development experience; or lieu of a degree with 4 years of professional development (Java/J2EE) experience. Minimum 2 years of in-depth experience in design, and development using J2EE/Eclipse, Spring Boot, REST Services in complex large scale environments Minimum 1 year of experience using in memory data grid technology such as GemFire, Redis, Hazelcast Minimum 1 year of experience integration with middleware platforms such as IBM WebSphere, Tibco, Oracle ESB Minimum 1 year of experience integration with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB Work Timings: 3 PM to 12 AM IST This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details. For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, LPP) L4 to L7 Employees who have completed 12 months in the organization and 12 months in current role and level are only eligible. L8+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L04+ Employees can apply Grade/Level: 09 Job Family Group Information Technology
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos
Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, billions of decisions get made by machines and humans behind those millions of packages. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? The list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.
The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both the stamina and mental quickness to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and the executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon’s last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example, being just as passionate about operational performance and predictability as about all other aspects of the customer experience.
The Successful Candidate Will Be Able To
Effectively manage customer expectations and resolve conflicts that balance client and company needs.
Develop processes to effectively maintain and disseminate project information to stakeholders.
Be successful in a delivery-focused environment, determining the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker or spokesperson, but someone who drives to get things done.
Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
Serve as a role model for Amazon Leadership Principles inside and outside the organization.
Actively seek to implement and distribute best practices across the operation.
Key job responsibilities
Metric reporting
Dashboard development
ETL pipeline design
Automation
Experiment design and support
Deep-dive analysis
Insight generation
Product improvement opportunity identification
Opportunity or problem sizing
Support for anecdotal audits, etc.
Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, Power BI, QuickSight, or similar tools
Experience performing A/B testing and applying basic statistical methods (e.g., regression) to difficult business problems
Experience with a scripting language (e.g., Python, Java, or R)
Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
Track record of generating key business insights and collaborating with stakeholders
Preferred Qualifications
Experience in designing and implementing custom reporting systems using automation tools
Knowledge of data modeling and data pipeline design
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ - H84
Job ID: A2876240
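As a rough illustration of the A/B testing and basic statistics work this posting describes, here is a minimal Python sketch of a two-proportion z-test comparing first-attempt delivery success rates between two groups. The group definitions and all numbers are made up for illustration; they are not Amazon data or methodology.

```python
# Hypothetical example: compare first-attempt delivery success rates between a
# control group and a group whose addresses used an improved geocode.
# All counts below are invented for illustration only.
from statsmodels.stats.proportion import proportions_ztest

successes = [46_200, 47_050]   # successful first-attempt deliveries (control, treatment)
trials = [50_000, 50_000]      # packages shipped in each group

z_stat, p_value = proportions_ztest(count=successes, nobs=trials)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the observed lift is unlikely to be random noise.
```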
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer at our company, you will be an integral part of a skilled Data Engineering team focused on developing reusable capabilities and tools to automate various data processing pipelines. Your responsibilities will include contributing to data acquisition, ingestion, processing, pipeline monitoring, and data validation. Your role is pivotal in maintaining the smooth operation of data ingestion and processing pipelines, ensuring that data in the data lake is up-to-date, valid, and usable at all times. With a minimum of 3 years of experience in data engineering, you should be proficient in Python programming and have a strong background in working with both RDBMS and NoSQL systems. Experience in the AWS ecosystem, including components like Airflow, EMR, Redshift, S3, Athena, and PySpark, is essential. Additionally, you should have expertise in developing REST APIs using Python frameworks such as Flask and FastAPI. Familiarity with crawling libraries like BeautifulSoup in Python would be advantageous. Your skill in writing complex SQL queries to retrieve key metrics and working with various data lake storage formats will be key to your success in this role.
Key Responsibilities:
- Design and implement scalable data pipelines capable of handling large data volumes.
- Develop ETL/ELT pipelines to extract data from upstream sources and synchronize it with data lakes in formats like Parquet, Iceberg, and Delta.
- Optimize and maintain data pipelines to ensure smooth operation and business continuity.
- Collaborate with cross-functional teams to source data for various business use cases.
- Stay informed about emerging data technologies and trends to continuously enhance our data infrastructure and architecture.
- Adhere to best practices in data querying and manipulation to uphold data integrity.
If you are a motivated Data Engineer with a passion for building robust data pipelines and ensuring data quality, we invite you to join our dynamic team and contribute to the success of our data engineering initiatives.
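To give a flavor of the ingestion work this role describes, here is a minimal PySpark sketch that reads raw CSV files from an upstream drop, applies a basic validation step, and writes partitioned Parquet to a data lake. The bucket paths, table name, and columns are hypothetical placeholders, and in practice an orchestrator such as Airflow would schedule a job like this.

```python
# Minimal sketch of a batch ingestion job: raw CSV -> validated, partitioned Parquet.
# S3 paths and column names are assumptions for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-raw-zone/orders/2024-06-01/"))

curated = (raw
           .dropDuplicates(["order_id"])                  # basic data-quality step
           .withColumn("ingest_date", F.current_date()))  # partition key for the lake

(curated.write
 .mode("append")
 .partitionBy("ingest_date")
 .parquet("s3://example-curated-zone/orders/"))
```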
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
FanCode is India’s premier sports destination committed to giving fans a highly personalised experience across content and merchandise for a wide variety of sports. Founded by sports industry veterans Yannick Colaco and Prasana Krishnan in March 2019, FanCode has over 100 million users. It has partnered with domestic and international sports leagues and associations across multiple sports. In content, FanCode offers interactive live streaming for sports with industry-first subscription formats like Match, Bundle and Tour Passes, along with monthly and annual subscriptions at affordable prices. Through FanCode Shop, it also offers fans a wide range of sports merchandise for sporting teams, brands and leagues across the world.
Technology @ FanCode
We have one mission: Create a platform for all sports fans. Built by sports fans for sports fans, we cover Sports Live Video Streaming, Live Scores & Commentary, Video On Demand, Player Analytics, Fantasy Research, News, and very recently, e-Commerce. We’re at the beginning of our story and growing at an incredible pace. Our tech stack is hosted on AWS and GCP, built on Amazon EC2, CloudFront, Lambda, API Gateway, and Google Compute Engine, Cloud Functions, and Google Cloud Storage. We have a microservices-based architecture using Java, Node.js, Python, PHP, Redis, MySQL, Cassandra, and Elasticsearch as our end-to-end stack serving product features. As a data-driven team, we also use Python and other big data technologies for Machine Learning and Predictive Analytics. Additionally, we make heavy use of Kafka, Spark, Redshift, BigQuery, and other cutting-edge technologies to keep improving FanCode's performance. You will be joining the Core Infra Engineering team at FanCode, which runs a fresh, stable, and secure environment for our talented developers to thrive in. Along with building a great foundation, the Core Infra team is also responsible for spreading its knowledge throughout the other teams, ensuring everyone takes advantage of the easy-to-use infrastructure, and applying best practices for Continuous Delivery, Containerization, Performance, Networking, and Security.
Your Role
Deploy solid, resilient cloud architectures by writing Infrastructure as Code automation.
Design and implement the services and tools needed to manage and scale a service-oriented architecture, e.g., service discovery, config managers, container orchestration, and more.
Build self-serve infrastructure automation and optimise deployment workflows at scale.
Build and maintain a compute orchestration platform using EC2 and GCE as the foundation.
Develop and support tools for infrastructure, and evangelise best practices to be used by other engineers.
Write code to implement networking and security at scale.
Mentor and support engineers regarding development, concepts, and best practices.
Must Haves:
1 to 3 years of relevant experience
Proficient with at least one scripting language (Python or Bash)
Strong infrastructure fundamentals (preferably on AWS and GCP)
Experience with containers and orchestration tools like Kubernetes (GKE/EKS)
Good to Haves:
Experience implementing CI and CD pipelines using Jenkins, ArgoCD or GitHub Actions
Experience using monitoring solutions like Datadog/New Relic, CloudWatch, the ELK Stack, or Prometheus/Grafana
Experience with DevOps tools like Terraform and Ansible
AWS, GCP, or Azure certification(s) is a plus
Previous experience of working in a startup
Love for sports
Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11, the world’s largest fantasy sports platform, FanCode, India’s digital sports destination, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/
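Although the infrastructure work above is largely driven by tools like Terraform and Kubernetes, the posting also calls for a scripting language, so here is a small hedged Python/boto3 sketch in that spirit: flag EC2 instances that are missing an ownership tag so they can be followed up on. The region and tag convention are assumptions, not FanCode's actual setup.

```python
# Illustrative "self-serve infrastructure automation" sketch (not FanCode's tooling):
# list EC2 instances that lack an "owner" tag.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if "owner" not in tags:
            print(f"{instance['InstanceId']} has no owner tag")
```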
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
karnataka
On-site
As a Senior Data Modeller, you will be responsible for leading the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial for the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts. Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity-relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements. To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential. Additionally, you should possess a background in data normalization, denormalization, dimensional modeling, and schema design, along with hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, as well as a track record of successfully selling data and analytics software to enterprise customers are key requirements. Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. Your ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.
Posted 1 week ago
3.0 years
0 Lacs
India
On-site
Data Engineer
About RevX
RevX helps app businesses acquire and reengage users via programmatic advertising to retain, monetize, and accelerate revenue. We're all about taking your app business to a new growth level. We rely on data science, innovative technology, AI, and a skilled team to create and deliver seamless ad experiences that delight your app users. That’s why RevX is the ideal partner for app marketers that demand trustworthy insights, a hands-on team, and a commitment to growth. We help you build sound mobile strategies, combining programmatic UA, app re-engagement, and performance branding to drive real and verifiable results so you can scale your business: with real users, high retention, and incremental revenue.
About the Role
We are seeking a forward-thinking Data Engineer who can bridge the gap between traditional data pipelines and modern Generative AI (GenAI)-enabled analytics tools. You'll design intelligent internal analytics systems using SQL, automation platforms like n8n, BI tools like Looker, and GenAI interfaces such as ChatGPT, Gemini, or LangChain. This is a unique opportunity to innovate at the intersection of data engineering, AI, and product analytics.
Key Responsibilities
Design, build, and maintain analytics workflows/tools leveraging GenAI platforms (e.g., ChatGPT, Gemini) and automation tools (e.g., n8n, Looker).
Collaborate with product, marketing, and engineering teams to identify and deliver data-driven insights.
Use SQL to query data from data warehouses (BigQuery, Redshift, Snowflake, etc.) and transform it for analysis or reporting.
Build automated reporting and insight-generation systems using visual dashboards and GenAI-based interfaces.
Evaluate GenAI tools and APIs for applicability in data analytics workflows.
Explore use cases where GenAI can assist in natural language querying, automated summarization, and explanatory analytics.
Work closely with business teams to enable self-service analytics via intuitive GenAI-powered interfaces.
Design and maintain robust data pipelines to ensure timely and accurate ingestion, transformation, and availability of data across systems.
Implement best practices in data modeling, testing, and monitoring to ensure data quality and reliability in analytics workflows.
Requirements
3+ years of experience in data analysis or a related field.
Strong proficiency in SQL with the ability to work across large datasets.
Hands-on experience building data tools/workflows using any of the following: n8n, Looker/LookML, ChatGPT API, Gemini, LangChain, or similar.
Familiarity with GenAI concepts, LLMs, prompt engineering, and their practical application in data querying and summarization.
Excellent problem-solving skills and a mindset to automate and optimize wherever possible.
Strong communication skills with the ability to translate complex data into actionable insights for non-technical stakeholders.
Nice to Have
Prior experience in AdTech (ad operations, performance marketing, attribution, audience insights, etc.).
Experience with Python, Jupyter Notebooks, or scripting for data manipulation.
Familiarity with cloud platforms like Google Cloud Platform (GCP) or AWS.
Knowledge of data visualization tools like Tableau, Power BI, or Looker.
Why Join Us?
Work on the cutting edge of GenAI and data analytics innovation.
Contribute to building scalable analytics tools that empower entire teams.
Be part of a fast-moving, experimentation-driven culture where your ideas matter.
For more information visit www.revx.io
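As a hedged sketch of the kind of GenAI-assisted reporting step this role describes, the snippet below takes the result of a warehouse SQL query and asks an LLM for a plain-language summary. The client usage follows the OpenAI Python SDK (v1+); the model name, metric names, and numbers are illustrative assumptions only, not RevX's actual stack or data.

```python
# Rough sketch: summarize SQL query results with an LLM for a non-technical reader.
# Requires OPENAI_API_KEY in the environment; all data below is invented.
import pandas as pd
from openai import OpenAI

# Stand-in for `SELECT campaign, installs, spend_usd FROM daily_campaign_stats ...`
results = pd.DataFrame({
    "campaign": ["retarget_in", "ua_global"],
    "installs": [1240, 3980],
    "spend_usd": [2100.0, 7450.0],
})

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Summarize these campaign results for a marketing lead:\n"
                   + results.to_string(index=False),
    }],
)
print(response.choices[0].message.content)
```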
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
As an experienced professional with 5-7 years of expertise, you will contribute to the seamless replication and integration of data from various source systems such as Oracle EBS, PLM, Retail, and SQL Server to Redshift and BI platforms. Your role will involve utilizing tools like GoldenGate, Qlik Replicate, and ODI to ensure efficient data transfer.
Your key responsibilities will include configuring, monitoring, and troubleshooting Oracle GoldenGate replication pipelines for EBS, PLM, and BI systems. You will also be responsible for administering Qlik Replicate jobs and monitoring data synchronization from Oracle/SQL Server to AWS Redshift. Additionally, you will manage ODI interfaces for batch ETL processes that feed the data lake or reporting systems. It will be crucial for you to ensure data integrity, adhere to latency SLAs, and plan for failure recovery in replication and transformation pipelines. Collaboration with BI teams to resolve source-side data issues and support for schema evolution planning will also be part of your responsibilities.
To excel in this role, you must possess expertise in Oracle GoldenGate for both active-active and active-passive replication scenarios. You should be familiar with Qlik Replicate and Redshift pipelines. Proficiency in ODI interface development and job orchestration is essential. Moreover, familiarity with data integrity validation, performance tuning, and log-based replication will be beneficial in this position.
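The GoldenGate and Qlik Replicate configuration itself lives in those tools, but the "data integrity validation" mentioned above is often just a scripted reconciliation between source and target. Below is a minimal, hedged Python sketch comparing row counts for one table between an Oracle source and Redshift; the connection URLs, dialect/driver choices, and table names are placeholders and will differ per environment.

```python
# Hedged sketch of a replication-integrity check: compare row counts for one table
# between the Oracle source and the Redshift target. All connection details are fake.
from sqlalchemy import create_engine, text

source = create_engine("oracle+oracledb://user:pass@source-host:1521/?service_name=EBS")
target = create_engine("postgresql+psycopg2://user:pass@redshift-host:5439/analytics")

def row_count(engine, table):
    # Simple COUNT(*) probe; real checks might also compare checksums or max timestamps.
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar()

src = row_count(source, "ap.invoices")
tgt = row_count(target, "staging.invoices")
print(f"source={src} target={tgt} drift={src - tgt}")
```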
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
We are looking for an AWS Serverless Developer to join our team in Bengaluru, with 3-5 years of experience. As an AWS Serverless Developer, your role will involve designing, developing, and maintaining serverless data pipelines using AWS Lambda, AWS Glue, IAM roles and policies, RDS, and other AWS services to ensure scalability and efficiency. You will be responsible for building audit and logging processes, ensuring data quality and governance, implementing CI/CD and a Git repo for code versioning, and developing new features with testing. Collaboration with the Lead Developer, Business Analyst, and Application Architect is essential to align on project goals. You will troubleshoot issues, fix bugs, and maintain documentation for applications, architecture, and processes. Working with a distributed team following Agile methodologies is crucial for timely project delivery.
To be successful in this role, you should have 3-5 years of software development experience focusing on AWS Lambda, IAM roles and policies, RDS, and other AWS services. Proficiency in PySpark or Python, CloudFormation templates, and Git for version control, along with hands-on experience with AWS Aurora and/or Redshift, is required. Experience in data engineering topics such as AWS Glue and data pipeline development, API data interfacing over API Gateway, strong analytical, problem-solving, and debugging skills, as well as excellent organizational skills for task prioritization based on project needs, are key qualifications for this position.
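To make the serverless pipeline and audit-logging responsibilities above more concrete, here is a minimal Python Lambda handler sketch that validates an incoming record and emits an audit log entry. The event shape, field names, and downstream steps are assumptions for illustration only.

```python
# Minimal sketch of a Lambda handler for a serverless ingestion step:
# validate an incoming record and log an audit entry. Field names are hypothetical.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

REQUIRED_FIELDS = {"record_id", "source_system", "payload"}

def lambda_handler(event, context):
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        logger.error("Rejected record, missing fields: %s", sorted(missing))
        return {"statusCode": 400, "body": json.dumps({"missing": sorted(missing)})}

    # Downstream steps (e.g., triggering a Glue job or writing to RDS) would go here.
    logger.info("Accepted record %s from %s", event["record_id"], event["source_system"])
    return {"statusCode": 200, "body": json.dumps({"status": "accepted"})}
```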
Posted 1 week ago