5.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Data Engineer - AWS (Financial Data Reconciliation)
Experience: 5-6 years
Location: On-site, Ahmedabad
Technical Skills:
• AWS Stack: Redshift, Glue (PySpark), Lambda, Step Functions, CloudWatch, S3, Athena
• Languages: Python (Pandas, PySpark), SQL (Redshift/PostgreSQL)
• ETL & Orchestration: Apache Airflow (MWAA), AWS Glue Workflows, AWS Step Functions
• Data Modeling: Experience with financial/transactional data schemas
• Data Architecture: Medallion (bronze/silver/gold) design, lakehouse patterns, slowly changing dimensions
Posted 1 week ago
0.0 - 5.0 years
0 - 0 Lacs
Ahmedabad, Gujarat
On-site
Data Engineer - AWS (Financial Data Reconciliation)
Experience: 5-6 years
Location: On-site, Ahmedabad
Technical Skills:
• AWS Stack: Redshift, Glue (PySpark), Lambda, Step Functions, CloudWatch, S3, Athena
• Languages: Python (Pandas, PySpark), SQL (Redshift/PostgreSQL)
• ETL & Orchestration: Apache Airflow (MWAA), AWS Glue Workflows, AWS Step Functions
• Data Modeling: Experience with financial/transactional data schemas
• Data Architecture: Medallion (bronze/silver/gold) design, lakehouse patterns, slowly changing dimensions
Job Type: Contractual/Temporary
Contract length: 6 months
Pay: ₹60,000.00 - ₹70,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or plan to relocate before starting work (Preferred)
Experience: AWS: 5 years (Preferred)
Work Location: In person
Expected Start Date: 10/07/2025
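As a rough illustration of the medallion (bronze/silver/gold) pattern both listings above describe, a bronze-to-silver promotion in PySpark might look like the following sketch. Bucket, table, and column names here are hypothetical, not from the posting.

```python
# Minimal sketch of a bronze -> silver promotion with SCD2-style validity
# columns, as used in medallion architectures. All paths and column names
# are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("silver-transactions").getOrCreate()

# Bronze: raw landed data, kept as-is.
bronze = spark.read.parquet("s3://example-lake/bronze/transactions/")

# Silver: deduplicate on the business key, latest record wins.
w = Window.partitionBy("transaction_id").orderBy(F.col("ingested_at").desc())
silver = (
    bronze
    .withColumn("_rn", F.row_number().over(w))
    .filter(F.col("_rn") == 1)
    .drop("_rn")
    .withColumn("effective_from", F.col("ingested_at"))  # SCD-style validity
    .withColumn("effective_to", F.lit(None).cast("timestamp"))
    .withColumn("is_current", F.lit(True))
)

silver.write.mode("overwrite").parquet("s3://example-lake/silver/transactions/")
```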
Posted 1 week ago
0.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
Remote
Job Information
Date Opened: 07/08/2025
Job Type: Full time
Industry: IT Services
City: Hyderabad
State/Province: Telangana
Country: India
Zip/Postal Code: 500059

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, and Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
We are seeking a highly experienced and hands-on Lead/Senior Data Engineer to architect, develop, and optimize data solutions in a cloud-native environment. The ideal candidate will have 7-12 years of strong technical expertise in AWS Glue, PySpark, and Python, along with experience designing robust data pipelines and frameworks for large-scale enterprise systems. Prior exposure to the financial domain or regulated environments is a strong advantage.

Key Responsibilities:
Solution Architecture: Design scalable and secure data pipelines using AWS Glue, PySpark, and related AWS services (EMR, S3, Lambda, etc.)
Leadership & Mentorship: Guide junior engineers, conduct code reviews, and enforce best practices in development and deployment.
ETL Development: Lead the design and implementation of end-to-end ETL processes for structured and semi-structured data.
Framework Building: Develop and evolve data frameworks, reusable components, and automation tools to improve engineering productivity.
Performance Optimization: Optimize large-scale data workflows for performance, cost, and reliability.
Data Governance: Implement data quality, lineage, and governance strategies in compliance with enterprise standards.
Collaboration: Work closely with product, analytics, compliance, and DevOps teams to deliver high-quality solutions aligned with business goals.
CI/CD Automation: Set up and manage continuous integration and deployment pipelines using AWS CodePipeline, Jenkins, or GitLab.
Documentation & Presentations: Prepare technical documentation and present architectural solutions to stakeholders across levels.

Required Qualifications:
7-12 years of experience in data engineering or related fields.
Strong expertise in Python programming with a focus on data processing.
Extensive experience with AWS Glue (both Glue Jobs and Glue Studio/Notebooks).
Deep hands-on experience with PySpark for distributed data processing.
Solid AWS knowledge: EMR, S3, Lambda, IAM, Athena, CloudWatch, Redshift, etc.
Proven experience in architecting and managing complex ETL workflows.
Proficiency with Apache Airflow or similar orchestration tools.
Hands-on experience with CI/CD pipelines and DevOps best practices.
Familiarity with data quality, data lineage, and metadata management.
Strong experience working in agile/scrum teams.
Excellent communication and stakeholder engagement skills.

Preferred/Good to Have:
Experience in financial services, capital markets, or compliance systems.
Knowledge of data modeling, data lakes, and data warehouse architecture.
Familiarity with SQL (Athena/Presto/Redshift Spectrum).
Exposure to ML pipeline integration or event-driven architecture is a plus.

Benefits
Flexible work culture and remote options
Opportunity to lead cutting-edge cloud data engineering projects
Skill-building in large-scale, regulated environments.
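Since this role centers on AWS Glue with PySpark, a minimal Glue job skeleton of the kind described might look like the sketch below. The database, table, and S3 path names are placeholders.

```python
# Skeleton of an AWS Glue PySpark job: read from the Glue Data Catalog,
# clean with plain PySpark, write partitioned Parquet back to S3.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from awsglue.dynamicframe import DynamicFrame

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)
cleaned = dyf.toDF().dropDuplicates(["event_id"]).filter("event_id IS NOT NULL")

glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/events/",
                        "partitionKeys": ["event_date"]},
    format="parquet",
)
job.commit()
```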
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Location: Chennai, Tamil Nadu, India
Job ID: R-230693
Date posted: 08/07/2025
Job Title: Engineer

Introduction to role:
Are you ready to make a significant impact in the world of biopharmaceuticals? AstraZeneca, a global leader in innovation-driven prescription medicines, is seeking a dedicated Data Engineer to join our Commercial IT Data Analytics & AI (DAAI) team. With operations in over 100 countries and headquarters in the United Kingdom, AstraZeneca offers a unique workplace culture that fosters innovation and collaboration. As a Data Engineer, you will play a crucial role in supporting and enhancing our data platforms built on AWS services. Your expertise in ETL, Data Warehousing, Databricks, and AWS applications will be vital in ensuring business continuity and driving efficiency. Are you up for the challenge?

Accountabilities
Monitor and maintain the health and performance of production systems and applications.
Provide timely incident response, troubleshooting, and resolution for technical issues raised by users or monitoring tools.
Perform root cause analysis for recurring issues and implement preventive measures.
Investigate data anomalies, troubleshoot failures, and coordinate with relevant teams for resolution.
Collaborate with development and infrastructure teams to support deployments and configuration changes.
Maintain and update technical documentation, standard operating procedures, and knowledge bases.
Ensure alignment to service-level agreements (SLAs) and minimize downtime or service disruptions.
Manage user access, permissions, and security-related requests as per organizational policies.
Participate in on-call rotations and provide after-hours support as needed.
Communicate effectively with collaborators, providing status updates and post-incident reports.
Proactively find opportunities for automation and process improvement in support activities.
Support data migration, upgrades, and transitions as required.
Support business continuity and disaster recovery exercises as required.

Essential Skills/Experience
Education background: B.E/B.Tech/MCA/MSc/BSc
Overall years of experience: 3 to 5 years
Solid experience with SQL, data warehousing, and building ETL pipelines
Hands-on experience with AWS services, including EMR, EC2, S3, Athena, RDS, Databricks, and Redshift
Skilled in working with columnar databases such as Redshift, Cassandra, or BigQuery
Good understanding of ETL processes and data warehousing concepts
Familiarity with scheduling tools (Airflow especially is a plus)
Able to write complex SQL queries for data extraction, transformation, and reporting
Excellent communication skills and ability to work well with both technical and non-technical teams
Strong analytical and troubleshooting skills in complex data environments

Desirable Skills/Experience
Experience with Databricks or Snowflake
Proficient in scripting and programming languages such as Shell Scripting and Python
Familiar with CI/CD using Bamboo
Proficient in version control systems, including Bitbucket and GitHub
Preferably experienced with release management processes
Significant prior experience in an IT environment within the pharmaceutical or healthcare industry

At AstraZeneca, we are committed to driving exciting transformation on our journey to becoming a digital and data-led enterprise. Our work connects across the entire business to power each function, influencing patient outcomes and improving lives.
By unleashing the power of our latest innovations in data, machine learning, and technology, we turn complex information into life-changing insights. Join us to work alongside leading experts in our specialist communities, where your contributions are recognized from the top. Ready to take the next step? Apply now to become part of our dynamic team!

Date Posted: 09-Jul-2025
Closing Date: 13-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
** Hiring for our customer - a well-known Data/Cloud Engineering company **
Job Title: Lead Cloud Data Engineer
Job Locations: Noida, Bengaluru, Indore
Experience: 9 to 12 years

Hiring a hands-on technical lead with:
Hands-on exposure to Big Data technologies - PySpark (DataFrame and SparkSQL)
Strong SQL and Data Warehousing
AWS - create AWS pipelines with the required AWS services, e.g. S3, IAM, Glue, EMR, Redshift, etc.
Orchestration with Airflow, and experience with any job scheduler
Develop efficient ETL pipelines as per business requirements
Team leading experience is a must.
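To illustrate the Airflow-orchestrated AWS pipeline this role describes, a minimal DAG might chain a Glue transform into a Redshift load, as in the sketch below (assuming Airflow 2.x with the Amazon provider package; job, bucket, and connection names are hypothetical).

```python
# Minimal Airflow DAG: run a Glue transform, then COPY the output into
# Redshift. All names and the schedule are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = GlueJobOperator(
        task_id="run_glue_transform",
        job_name="example-transform-job",
        region_name="ap-south-1",
    )
    load = S3ToRedshiftOperator(
        task_id="load_to_redshift",
        schema="analytics",
        table="daily_sales",
        s3_bucket="example-bucket",
        s3_key="curated/daily_sales/",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
    )
    transform >> load  # Glue transform must finish before the load starts
```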
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

About Team
The RBS group is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection, and good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience.

Overview of the role
The Business Research Analyst will be responsible for continuous improvement projects across the RBS teams, leading to each of its delivery levers. The long-term goal of the Research Analyst (RA) role is to eliminate defects and automate qualifying tasks. Secondary goals are to improve the vendor or customer experience and to enhance GMS/FCF. This will require collaboration with local and global teams, which have process and technical expertise. Therefore, the RA should be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. The RA works across teams and the Ops organization at country, regional, and/or cross-regional level to drive improvements and enablers to implement solutions for customers, cost savings in process workflows, systems configuration, and performance metrics. The RA leads projects and opportunities across Operations (FCs, sortation, logistics centres, supply chain, transportation, engineering, etc.) that are business critical and may be global in nature. The RA performs big data analysis to identify defect patterns and process gaps and proposes long-term solutions to eliminate the defects/issues, and writes clear and detailed functional specifications based on business requirements, as well as writes and reviews business cases.

Key Responsibilities for this Role:
Scoping, driving, and delivering complex projects across multiple teams.
Perform root cause analysis by understanding the data need, getting/pulling the data, and analyzing it to form a hypothesis and validate it using data.
Dive deep to drive product pilots, build and analyze large data sets, and construct problem hypotheses that help steer the product feature roadmap (e.g. with use of R, SAS, STATA, Matlab, Python, or Java), tools for databases (e.g. SQL, Redshift), and ML tools (RapidMiner, Eider).
Build programs to create a culture of continuous improvement within the business unit, and foster a customer-centric focus on the quality, productivity, and scalability of our services.
Find scalable solutions to business problems by executing pilots and building deterministic and ML models (plug-and-play on ready-made ML models, with Python skills).
Manage meetings and business and technical discussions regarding their part of the projects.
Makes recommendations and decisions that impact development schedules and the success of a product or project. Drives teams/partners to meet program and/or product goals. Coordinates design effort between internal and external teams to develop optimal solutions for their part of the project for Amazon's network. Supports identification of downstream problems (i.e. system incompatibility, resource unavailability) and escalates them to the appropriate level before they become project-threatening. Performs supporting research, conducts analysis of the bigger parts of the projects, and effectively interprets reports to identify opportunities, optimize processes, and implement changes within their part of the project. Ability to convince and interact with stakeholders at all levels, either to gather data and information or to execute and implement according to the plan. Ability to deal with ambiguity; a problem solver. Builds reports from established data warehouses and self-service reporting tools. Communicates ideas effectively and with influence (both verbally and in writing), within and outside the team.

Key Performance Areas
Solve large and complex business problems by aligning multiple teams together.
Data analytics and data science
Machine learning

Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Bachelor's degree
3+ years of experience with Python or R
1+ year of experience in financial/business analysis
2+ years of experience with SQL
2+ years of ML project experience
2+ years of experience using data analysis packages (NumPy, pandas, SciPy, etc.)

Preferred Qualifications
Knowledge of data modeling and data pipeline design
NLP and text processing
Deep learning

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI - BLR 14 SEZ - F07
Job ID: A2968108
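To make the defect-pattern analysis described above concrete, a first pass in pandas might rank defect types from an extract already pulled out of Redshift. The file and column names below are hypothetical.

```python
# Illustrative sketch: rank catalog defect types by volume to pick
# automation candidates. Column names are placeholders.
import pandas as pd

defects = pd.read_csv("catalog_defects_extract.csv")

summary = (
    defects
    .groupby(["marketplace", "defect_type"])
    .agg(total=("defect_id", "count"),
         unique_asins=("asin", "nunique"))
    .sort_values("total", ascending=False)
    .reset_index()
)
summary["share"] = summary["total"] / summary["total"].sum()
print(summary.head(10))  # top defect patterns to investigate first
```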
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
JOB_POSTING-3-72216-5
Job Description
Role Title: VP, Data Engineering Tech Lead (L12)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise, and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet, and more. We have recently been ranked #5 among India's Best Companies to Work for 2023 and #21 on LinkedIn's Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India's Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India's Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition, we also have Regional Engagement Hubs across India and a co-working space in Bangalore.

Organizational Overview
This role will be part of the Data Architecture & Analytics group, part of the CTO organization. The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. The team is responsible for the SYF public cloud platform & services: governing health, performance, capacity, and costs of resources and ensuring adherence to service levels, and building well-defined processes for cloud application development and service enablement.

Role Summary/Purpose
We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF key data domains (originations, loan activity, collection, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment.

Key Responsibilities
Manage the end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects.
Provide guidance on suitable options, designing and creating data pipelines for analytical solutions across data lakes, data warehouses, and cloud implementations.
Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift.
Ensure data integration, transformation, and storage processes are optimized for scalability and performance in the cloud environment.
Ensure data security, governance, and compliance in the cloud infrastructure.
Provide leadership and guidance to data engineering teams, ensuring best practices are followed.
Ensure timely delivery of high-quality solutions in an Agile environment.
Required Skills/Knowledge
Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or, in lieu of a degree, 12+ years of relevant experience
Minimum 10+ years of experience managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments
Minimum 10+ years of financial services experience
Minimum 6+ years of experience working with Data Warehouses/Data Lakes/Cloud
6+ years of hands-on programming experience in ETL tools - Ab Initio or Informatica highly preferred; able to read and reverse engineer the logic in Ab Initio graphs
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Working knowledge of Hive, Spark, Kafka, and other data lake technologies
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution
Experience analyzing system requirements and implementing migration methods for existing data
Ability to develop and maintain strong collaborative relationships at all levels across IT and the business
Excellent written and oral communication skills, along with a strong ability to lead and influence others
Experience working iteratively in a fast-paced agile environment
Demonstrated ability to drive change and work effectively across business and geographical boundaries
Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology
Superior decision-making, client relationship, and vendor management skills

Desired Skills/Knowledge
Prior work experience in a credit card/banking/fintech company
Experience dealing with sensitive data in a highly regulated environment
Demonstrated implementation of complex and innovative solutions
Agile experience using JIRA or similar Agile tools

Eligibility Criteria
Bachelor's degree in Computer Science or a similar technical field of study (Master's degree preferred)
Minimum 12+ years of experience managing large-scale data platform (Data Warehouse/Data Lake/Cloud) environments
Minimum 12+ years of financial services experience
Minimum 8+ years of experience working with Oracle Data Warehouses/Data Lakes/Cloud
8+ years of programming experience in ETL tools - Ab Initio or Informatica highly preferred; able to read and reverse engineer the logic in Ab Initio graphs
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Rigorous data analysis through SQL in Oracle and various Hadoop technologies
Involvement in large-scale data analytics migration from on-premises to a public cloud
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution
Experience analyzing system requirements and implementing migration methods for existing data
Excellent written and oral communication skills, along with a strong ability to lead and influence others
Experience working iteratively in a fast-paced agile environment

Work Timings: 3:00 PM IST to 12:00 AM IST (WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs.
Please discuss this with the hiring manager for more details.)

For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal, PIP).
Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level/Grade: 12
Job Family Group: Information Technology
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Analytics
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), AWS Redshift
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary
The purpose of the Data Engineering function within the Data and Analytics team is to develop and deliver great data assets and data domain management for our Personal Banking customers and colleagues, seamlessly and reliably, every time. As a Senior Data Engineer, you will bring expertise on data handling, curation, and conformity capabilities to the team; support the design and development of solutions which assist analysis of data to drive tangible business benefit; and assist colleagues in developing solutions that will enable the capture and curation of data for analysis, analytical and/or reporting purposes. The Senior Data Engineer must have experience working as part of an agile team to develop a solution in a complex enterprise.

Roles & Responsibilities
Hands-on development experience in Data Warehousing and/or Software Development
Experience utilising tools and practices to build, verify, and deploy solutions in the most efficient ways
Experience in Data Integration and Data Sourcing activities
Experience developing data assets to support optimised analysis for customer and regulatory outcomes
Provide ongoing support for platforms as required, e.g. problem and incident management
Experience in Agile software development, including GitHub, Confluence, Rally

Professional & Technical Skills
Experience with cloud technologies, especially AWS (S3, Redshift, Airflow), and DevOps and DataOps tools (Jenkins, Git, Erwin)
Advanced SQL and Python user
Knowledge of UNIX, Spark, and Databricks

Additional Information
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/domain skills: Some expertise in Retail Banking, Business Banking, and/or Wealth Management preferred
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join Barclays as a Senior R&D Infrastructure and Data Engineer, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. In this role, you will be an integral part of our Cyber Fraud Fusion Centre, delivering scalable CFFC services to disrupt and prevent upstream economic crime.

To be successful as a Senior R&D Infrastructure and Data Engineer, you will need the following:
Experience working as a cloud engineer in a global organisation, covering a broad spectrum of architectures including deployment of GUI applications, data transformation, automation, and data storage solutions.
Demonstrated ability to coach software and data engineers to deliver complex projects under a consistent delivery process (e.g. CI/CD).
Broad experience in AWS Cloud technology including management and governance tools (e.g. Lambda, Auto Scaling, VPC, EC2, KMS, IAM, CloudFormation, CloudWatch, S3, RDS, Glue, Athena, Lake Formation, Redshift), and knowledge of Databricks and Snowflake.
Experience supporting analytics use cases from an MLOps and data hydration perspective.
Ability to drive projects technically to completion, identify risks and costs, and challenge architecture and long-term sustainability.
Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.

Some other highly valued skills may include:
MLOps experience on on-prem and cloud platforms.
Knowledge of the latest social engineering tactics used by cybercriminals, particularly in the scams and fraud spaces.
Experience processing real-time event data at scale.
Practical understanding of full-stack web development.
Experience working in a fraud technology function.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. The successful candidate will be based in Pune.

Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities
Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
Develop processing and analysis algorithms fit for the intended data complexity and volumes.
Collaborate with data scientists to build and deploy machine learning models.

Assistant Vice President Expectations
Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues.
Identify ways to mitigate risk, and develop new policies/procedures in support of the control and governance agenda.
Take ownership of managing risk and strengthening controls in relation to the work done.
Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function.
Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy.
Engage in complex analysis of data from multiple sources of information, internal and external, such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively.
Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience.
Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge and Drive - the operating manual for how we behave.
Posted 1 week ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Associate Database Administrator
So what does an Associate Database Administrator do? Think of yourself as someone who will be performing administrative tasks and day-to-day database management duties for our Core Apps tools and solutions.

Responsibilities
Oversee administration and maintenance of different database engines, both on-prem and in AWS. This also includes data warehouses in the cloud (Redshift experience is a plus).
Perform health checks and performance tuning to ensure databases are available and healthy.
Put disaster recovery plans in place and test them for all databases.
Ensure protocols for DB security and data privacy are in place.
Maintain compliance of all databases with various InfoSec and audit bodies.
Automate tasks as deemed necessary.
Handle database service request tickets in a timely manner.
Audit accounts on periodic review.
Collaborate with various IT groups for support needed on infrastructure requests and dependencies.

Qualifications
Bachelor's degree or equivalent experience in Computer Science, Technology, or a related field of study.
Advanced experience (minimum of 2 years) handling DBA tasks for MySQL and SQL Server, including installation, upgrades, patching, and deployments.
Experience in administration of data warehouses (Redshift is a plus).
Experience handling Amazon Aurora (a fully managed relational database engine compatible with MySQL and PostgreSQL).
Proficient with high-availability tools like AWS DMS, Always On, FCI, Replication, etc.
Experience with database monitoring tools such as SQL Server Profiler.
Proficient in SQL (Structured Query Language) for managing relational databases, primarily the T-SQL extension.
Hands-on experience implementing disaster recovery for different platforms.
Experience or knowledge of handling PCI-DSS, data privacy, and DB audits.
DB hardening based on CIS benchmarks.
Knowledge of DB performance tuning and SQL tuning.

Work Location / Work Schedule / Travel:
Hybrid
Rotational shifts
Five working days

TaskUs is proud to be an equal opportunity workplace and is an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' people-first culture thrives on it for the benefit of our employees, our clients, our services, and our community.

Req Id: R_2505_6646
Posted At: Thu May 22 2025 00:00:00 GMT+0000 (Coordinated Universal Time)
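The health checks this role automates can be scripted; a minimal sketch against MySQL using the PyMySQL driver might look like the following. Host, user, and credentials are placeholders.

```python
# Hedged sketch of a periodic MySQL health check: connection count and
# replication status. Connection details are placeholders.
import pymysql

conn = pymysql.connect(host="db.example.internal", user="monitor",
                       password="***", database="mysql")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL STATUS LIKE 'Threads_connected'")
        name, value = cur.fetchone()
        print(f"{name}: {value}")

        cur.execute("SHOW REPLICA STATUS")  # SHOW SLAVE STATUS on older MySQL
        replica = cur.fetchone()
        if replica is None:
            print("No replication configured on this node")
finally:
    conn.close()
```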
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
India
Remote
Position: IICS Developer
Location: Gurugram, Kolkata, Mumbai, Chennai, Hyderabad, Bangalore, Pune (Hybrid - 2 days on-site, 3 days remote)
Duration: Permanent Role

Job Description
3-8 years of technology consulting experience
A minimum of 4 years of experience in IICS data integration development
High degree of knowledge using various IICS services like Data Integration, Mass Ingestion, and more
Strong knowledge of AWS services like Lambda, Glue, S3, Redshift, SNS, SQS, and more
Strong scripting experience with Python, ability to write SQL queries, and strong analytical skills
Experience working with CI/CD/DevOps is a must
Proven experience implementing migration projects with IICS
Experience working with the Matillion ETL tool is nice to have
Proven experience with agile/iterative methodologies implementing Cloud projects
Ability to translate business requirements and technical requirements into technical design
Good knowledge of end-to-end project delivery methodology implementing Cloud projects
Strong UNIX operating system concepts and shell scripting knowledge
Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners
Ability to operate independently with a clear focus on schedule and outcomes
Posted 1 week ago
6.0 - 10.0 years
22 - 25 Lacs
Bengaluru
Work from Office
Our engineering team is looking for a Data Engineer who is very proficient in Python, has a very good understanding of AWS cloud computing and ETL pipelines, and demonstrates proficiency with SQL and relational database concepts. In this role you will be a mid-to-senior-level individual contributor guiding our migration efforts, serving as a senior data engineer and working closely with the data architects to evaluate best-fit solutions and processes for our team. You will work with the rest of the team as we move away from legacy tech and introduce new tools and ETL pipeline solutions. You will collaborate with subject matter experts, data architects, informaticists, and data scientists to evolve our current cloud-based ETL to the next generation.

Responsibilities
Independently prototype/develop data solutions of high complexity to meet the needs of the organization and business customers.
Design proof-of-concept solutions utilizing an advanced understanding of multiple coding languages to meet technical and business requirements, with an ability to perform iterative solution testing to ensure specifications are met.
Design and develop data solutions that enable effective self-service data consumption, and describe their value to the customer.
Collaborate with stakeholders in defining metrics that are impactful to the business. Prioritize efforts based on customer value.
Have an in-depth understanding of Agile techniques. Set expectations for deliverables of high complexity. Assist in the creation of roadmaps for data solutions. Turn vague ideas or problems into data product solutions.
Influence strategic thinking across the team and the broader organization.
Maintain proof-of-concept and prototype data solutions, and manage any assessment of their viability and scalability, with your own team or in partnership with IT.
Working with IT, assist in building robust systems focusing on long-term and ongoing maintenance and support.
Ensure data solutions include the deliverables required to achieve high-quality data.
Display a strong understanding of complex multi-tier, multi-platform systems, and apply principles of metadata, lineage, business definitions, compliance, and data security to project work.
Have an in-depth understanding of Business Intelligence tools, including visualization and user experience techniques. Work with IT to help scale prototypes.
Demonstrate a comprehensive understanding of new technologies as needed to progress initiatives.

Requirements
Expertise in Python programming, with demonstrated real-world experience building out data tools in a Python environment.
Expertise in AWS services, with demonstrated real-world experience building out data tools on AWS.
Bachelor's degree in Computer Science, Computer Engineering, or a related discipline preferred. Master's in the same or related disciplines strongly preferred.
3+ years of experience coding for data management, data warehousing, or other data environments, including, but not limited to, Python and Spark. Experience with SAS is preferred.
3+ years of experience as a developer working in an AWS cloud computing environment.
3+ years of experience using Git or Bitbucket.
Experience with Redshift, RDS, and DynamoDB is preferred.
Strong written and oral communication skills are required.
Experience in the healthcare industry with healthcare data analytics products.
Experience with healthcare vocabulary and data standards (OMOP, FHIR) is a plus.
Posted 1 week ago
170.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Birlasoft:
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

Job Title: AWS Redshift Expert

Role Overview
We are seeking a highly skilled AWS Redshift Expert to join our data engineering team. This role is critical to supporting our AWS ProServe engagements and internal analytics initiatives. The ideal candidate will have deep expertise in Redshift architecture, performance tuning, and integration with BI tools such as Looker. You will work closely with cross-functional teams including AWS tech leads, data analysts, and client stakeholders to ensure scalable, secure, and high-performing data solutions.

Key Responsibilities
• Design, deploy, and manage AWS Redshift clusters for enterprise-scale data warehousing.
• Optimise query performance using DISTKEY, SORTKEY, and materialised views.
• Collaborate with BI teams to refactor LookML models and improve dashboard performance.
• Conduct performance benchmarking and implement automated alerts for degradation.
• Lead data migration efforts from platforms like BigQuery to Redshift.
• Ensure data security, compliance, and backup/recovery protocols are in place.
• Provide technical leadership during client interviews and solution discussions.

Required Skills & Experience
• 5+ years of experience in data engineering, with at least 3 years focused on AWS Redshift.
• Hands-on experience with Redshift performance tuning and workload management.
• Familiarity with BI tools like Looker and Power BI, and semantic layer optimisation.
• Experience in cloud architecture and AWS services (EC2, S3, IAM, VPC).
• Excellent communication skills to interface with clients and internal leadership.
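The DISTKEY/SORTKEY tuning named above amounts to choosing distribution and sort columns at table-creation time. A minimal sketch, issued through the redshift_connector driver, follows; the cluster endpoint, credentials, and table design are illustrative only.

```python
# Sketch: create a Redshift fact table with an explicit distribution key
# (co-locates rows joined on customer_id) and sort key (prunes blocks on
# date-range scans). All names and credentials are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="analytics", user="admin", password="***",
)
ddl = """
CREATE TABLE sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
"""
cur = conn.cursor()
cur.execute(ddl)
conn.commit()
```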
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Proven experience in business and data analytics
Mentor teammates on various business-critical projects
Solid experience in data analysis and reporting; exposure to BFSI customer and business data is a plus
Able to communicate with the various stakeholders, manage tasks and issues, and monitor progress to ensure the project is on track
Proficient in SQL (data prep, procedures, etc.) and advanced Excel (pivots, data models, advanced formulas, etc.)
Experience working on MSSQL, Redshift, Databricks, and business intelligence tools (e.g. Tableau)
Problem-solving skills; a methodical and logical approach
Willingness to learn and adapt to new technologies
Excellent written and verbal communication skills
Posted 1 week ago
6.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
About The Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure is limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We're looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

About The Role
We are seeking a highly motivated Data Engineer to join our Data Platform team for our Edge Computing AI Platform. As a Data Engineer in our Data Platform team, you will be responsible for helping us shape the future of data ingestion, processing, and analysis, while maintaining and improving existing data systems. If you are a highly motivated individual with a passion for cutting-edge AI, cloud, edge, and infrastructure technology and are ready to take on the challenge of defining and delivering a new computing and AI platform, we would love to hear from you.

Location: This role is office-based at our Trivandrum, Kerala office.

What You'll Do (Key Responsibilities)
Build new tools and services that support other teams' data workflows, ingestion, processing, and distribution.
Design, discuss, propose, and implement changes to our existing data tooling and services.
Collaborate with a diverse group of people, giving and receiving feedback for growth.
Execute on big opportunities and contribute to building a company culture rising to the top of the AI and Edge Computing industry.

Required Qualifications
6+ years of experience in software development.
Experience with data modeling, ETL/ELT processes, and streaming data pipelines.
Familiarity with data warehousing technologies like Databricks/Snowflake/BigQuery/Redshift and data processing platforms like Spark; working with data warehousing file formats like Avro and Parquet.
Strong understanding of Storage (object stores, data virtualization) and Compute (Spark on K8s, Databricks, AWS EMR, and the like) architectures used by data stack solutions and platforms.
Experience with scheduler tooling like Airflow.
Experience with version control systems like Git and working using a standardized git flow.
Strong analytical and problem-solving skills, with the ability to work independently and collaboratively in a team environment.
Professional experience developing data-heavy platforms and/or APIs.
A strong understanding of distributed systems and how architectural decisions affect performance and maintainability.
Bachelor's degree in Computer Science, Electrical Engineering, or a related field.

Preferred Qualifications
Experience analyzing ML algorithms that could be used to solve a given problem and ranking them by their success probability.
Proficiency with a deep learning framework such as TensorFlow or Keras.
Understanding of MLOps practices and practical experience with platforms like Kubeflow/SageMaker.

Compensation & Benefits
For India-based candidates: We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada.

You're a Great Fit if You're
A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
Thrive in a fast-paced environment.
You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
A collaborative team player. You focus on business success and are motivated by team accomplishment vs. personal agenda.
Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Scientist I/II

Job Description
At Philips, data is at the center of everything we do. We believe that every data item has a story of its own. If you have a penchant for unravelling hidden patterns, then you are the person we are looking for! To build our next generation smarts, you will be working with experts in a variety of fields, including clinical & regulatory specialists, UX designers, dental and skin care professionals, system architects, and product management, to integrate new technologies and refine system performance. You will help build learning systems, leveraging data sets consisting of user interactions and actions, collected from hundreds of thousands of consumer devices per day, to model, analyse, and predict user behaviours. While some of our algorithms run on resource-constrained devices, others require large clusters on our cloud infrastructure. As a data scientist you will investigate, discern, and interpret data sets to recognize user behaviour and predict trends.

Job Responsibilities
Use predictive modeling to increase and optimize customer experiences, revenue generation, campaign optimization, and other business outcomes.
Work with product management to develop data use cases and embed predictive models in workflows, both on resource-constrained platforms and cloud-enabled.
Select features and build and optimize classifiers using machine learning and deep learning techniques.
Collaborate with Data Engineers to enhance data collection and ingestion/curation techniques to include information that is relevant for building analytic systems.
Process, cleanse, and verify the integrity of data used for analysis.
Develop processes and tools to monitor and analyze model performance and data accuracy. Life cycle management of predictive models.
Adhere to compliance procedures in accordance with regulatory standards, requirements, and policies.
Manage and design the reporting environment, including data sources, security, and metadata.

Job Qualifications
Master's degree or PhD in Computer Science, Information Management, Statistics, or a related field, with 5 to 7 years of experience in the consumer or healthcare industry manipulating data sets and building predictive models, with a focus on product development.
Experience in statistical modelling, machine learning, data mining, unstructured data analytics, and natural language processing.
Sound understanding of Bayesian modelling, classification models, cluster analysis, neural networks, nonparametric methods, multivariate statistics, etc.
Strong hands-on knowledge of ML techniques like regression algorithms, k-NN, Naïve Bayes, and SVM, and ensemble techniques like Random Forest, AdaBoost, etc.
Strong knowledge of unsupervised learning algorithms using neural networks and deep learning.
Strong knowledge of data wrangling and exploration techniques to identify patterns, trends, and outliers.
Deep knowledge and practical experience with data science toolkits, such as NumPy, Pandas, scikit-learn, or equivalent.
Experience with data visualization tools, such as QlikView, Matplotlib, seaborn, or equivalent.
Proficiency in using query languages, such as SQL and PL/SQL.
Hands-on experience with one or more databases like Hadoop, AWS Redshift, Snowflake, etc.
Good applied statistics skills, such as distributions, statistical testing, regression, etc.
Good ETL scripting and programming skills, such as Python, R, or Scala, to integrate the developed solution into the proposition.
A team player capable of working and integrating across cross-functional teams to implement project requirements. Experience in technical requirements gathering and documentation. Ability to work effectively and independently in a fast-paced, global, collaborative agile team environment with tight deadlines. A flexible, pragmatic, and collaborative team player with an innate ability to engage with stakeholders at all levels in the organization. A self-starter with high levels of drive, energy, resilience, and a desire for professional excellence, with a passion for data and data science.

How We Work Together
We believe that we are better together than apart. For our office-based teams, this means working in person at least 3 days per week. This role is an office role.

About Philips
We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. Learn more about our business. Discover our rich and exciting history. Learn more about our purpose. If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here.
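As a small illustration of the ensemble techniques named in the qualifications (Random Forest among them), a scikit-learn classifier might be fit as in the sketch below. The dataset, file name, and feature columns are hypothetical.

```python
# Illustrative Random Forest classifier on a hypothetical device-usage
# dataset; all column names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("device_sessions.csv")  # hypothetical extract
X = df[["session_length", "pressure_avg", "mode_switches"]]
y = df["replaced_head_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```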
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
Hi, I'm Christy Anne, your Recruiter and guide to joining CSG! We are excited to learn more about you and your unique background.

Job Profile: Software Dev Engineer II. To be considered for this role, candidates should meet the following criteria:
Leverage CSG's Xponent to build operational marketing databases for customers across a range of verticals.
Develop custom extensions to the standard data model where needed, to accommodate customer-specific data sources that are not represented within the standard data model, to support analytics, reporting, and marketing activation use cases.
Productize custom-developed artifacts into the Xponent standard data model, where applicable.
Create and maintain database artifacts and technical documents (functional specs, design documents, data models) for all custom development.
Design and implement test procedures to ensure data quality.
Provide feedback and ideas on improvements to the Xponent feature set and usability, with the aim of continuously improving the implementation process and results.

Is this opportunity right for you? We're looking for candidates who have:
A Bachelor's degree in technology (Computer Science, Information Science).
4+ years' experience developing ETL pipelines in database technologies (knowledge of Redshift/Snowflake preferred).
Understanding of Snowflake architecture, Snowflake cloud development, and Snowflake admin management is preferred if experienced in Snowflake.
Strong knowledge of RDBMS (MySQL, SQL Server, etc.) and database design concepts is a must.
Strong knowledge of Python programming and shell scripting is a must.
Experience with cloud platforms (AWS/Azure) is a must.
Knowledge of NoSQL databases (MongoDB, Cassandra, etc.) is preferred.
Ability to work collaboratively in a fast-paced environment.
Excellent client-facing abilities with strong communication skills.

CSGer Perks & Benefits
Work from home, in-office, or hybrid
Employee Belonging Groups
Healthcare: Dental, Medical, and Vision
Paid Vacation, Volunteer, and Holiday Time Off
And so much more! View More Benefits (https://www.csgi.com/careers/)

If you would like to be considered for employment opportunities with CSG and need special assistance due to a disability or accommodation for a disability throughout any aspect of the application process, please call us at +1 (402) 431-7440 or email us at accommodations@csgi.com. CSG provides accommodations for persons with disabilities in employment, including during the hiring process and any interview and/or testing processes.

Our Story
CSG empowers companies to build unforgettable experiences, making it easier for people and businesses to connect with, use and pay for the services they value most. For over 40 years, CSG's technologies and people have helped some of the world's most recognizable brands solve their toughest business challenges and evolve to meet the demands of today's digital economy. By channeling the power of all, we make ordinary customer and employee experiences extraordinary. Our people [CSGers] are fearlessly committed and connected, high on integrity and low on ego, making us the easiest company to do business with and the best place to work. We power a culture of integrity, innovation, and impact across our locations, representing the most authentic version of ourselves to build a better future together. That's just who we are. Learn more about CSG Inclusion & Impact here.

Our Guiding Principles
Impact: Always help and empower others, whether they're colleagues or customers.
When our employees set their minds to something, great things happen.
Integrity: Do what's right for our customers and our people while being authentic. We treat everyone with trust and respect - that's just who we are.
Inspiration: Be bold in the way you think and passionate about the work you do. Test out innovative ideas without the fear of failure.

Location(s): India Remote (IN.Bangalore.Remote)
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Data Engineer
Job Type: 6 Months, Contract-to-Hire
Location: Remote

iO Associates is seeking a skilled Data Engineer to design and optimize cloud-based data pipelines for a fast-growing analytics firm specializing in eCommerce. You'll work with cutting-edge technologies like Python, PySpark, and DBT to transform raw data into actionable insights for global brands.

Required:
5+ years' experience in data engineering with strong Python/PySpark skills.
Expertise in SQL, data modeling, and cloud platforms (AWS or GCP preferred).
Develop and maintain DBT models: build modular, tested, and well-documented DBT models to transform raw data into analytics-ready datasets.
Optimize SQL transformations: write efficient SQL logic within DBT models to improve performance and maintainability across large datasets.
Excellent communication skills to bridge technical and business needs.

Key Responsibilities:
Design, build, and maintain scalable ETL pipelines for high-volume eCommerce data.
Implement efficient data warehousing solutions (Redshift, BigQuery, Snowflake).
Develop robust data models and ensure data quality through validation processes.
Optimize pipeline performance and troubleshoot issues in cloud environments (AWS/GCP).
Collaborate with analytics teams to deliver reliable datasets for business intelligence.

Why Apply?
Work with cutting-edge data technologies in a high-growth environment.
Remote-first culture with flexible work arrangements.
Opportunity to build solutions powering data-driven decisions for global brands.
Collaborative team that values innovation and professional growth.
Join us to shape the future of eCommerce analytics!
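Most dbt models are SQL with Jinja, but dbt also accepts Python models (dbt-core 1.3+, on warehouses such as Snowflake, Databricks, or BigQuery, though not Redshift). To keep examples in one language, here is a minimal Python-model sketch; "raw_orders" is a hypothetical upstream model.

```python
# models/stg_orders.py -- minimal dbt Python model. dbt injects the `dbt`
# and `session` objects; no imports are required for this skeleton.
def model(dbt, session):
    dbt.config(materialized="table")
    orders = dbt.ref("raw_orders")  # returns a platform DataFrame
    # The platform's DataFrame API (Snowpark/PySpark) applies from here,
    # e.g. on Databricks: orders = orders.dropDuplicates(["order_id"])
    return orders
```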
Posted 1 week ago
5.0 - 10.0 years
12 - 20 Lacs
Chennai
Work from Office
Location: Chennai, India
Experience: 5+ years
Work Mode: Full-time (9am-6:30pm), in-office (Monday to Friday)
Department: Asign Data Sciences

About Us:
At Asign, we are revolutionizing the art sector with our innovative digital solutions. We are a passionate and dynamic startup dedicated to enhancing the art experience through technology. Join us in creating cutting-edge products that empower artists and art enthusiasts worldwide.

Role Overview
We are looking for an experienced Data Engineer with a strong grasp of ELT architecture and experience to help us build and maintain robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure. The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and making it accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving our data architecture and ensuring that our data flows efficiently and securely.

Key Responsibilities:
Design, develop, and maintain efficient and scalable ELT data pipelines.
Work closely with the data science and backend teams to understand data needs and transform raw inputs into structured datasets.
Integrate multiple data sources, including APIs, web pages, spreadsheets, and databases, into a central warehouse.
Monitor, test, and continuously improve data flows for reliability and performance.
Create documentation and establish best practices for data governance, lineage, and quality.
Collaborate with product and tech teams to plan data models that support business and AI/ML applications.

Required Skills:
Minimum 5 years of hands-on experience in data engineering.
Solid understanding of and experience with ELT pipelines and modern data stack tools.
Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.).
Proficiency in Python and SQL.
Experience working with APIs and data integration from multiple sources.
Familiarity with one or more cloud data warehouses (e.g. Snowflake, BigQuery, Redshift).
Strong problem-solving and debugging skills.

Qualifications:
Must-have:
Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field
Proven experience (5+ years) in data engineering, data integration, and data management
Hands-on experience with data sourcing tools and frameworks (e.g. Scrapy, Beautiful Soup, Selenium, Playwright)
Proficiency in Python and SQL for data manipulation and pipeline development
Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g. Redshift, BigQuery, Snowflake)
Familiarity with workflow orchestration tools (e.g. Airflow, Prefect, Dagster)
Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.)
Solid understanding of data modeling, ETL best practices, and data governance principles
Systems knowledge and experience working with Docker
Strong and creative problem-solving skills and the ability to think critically about data engineering solutions
Effective communication and collaboration skills
Ability to work independently and as part of a team in a fast-paced, dynamic environment

Good-to-have:
Experience working with APIs and third-party data sources
Familiarity with version control (Git) and CI/CD processes
Exposure to basic machine learning concepts and working with data science teams
Experience handling large datasets and working with distributed data systems

Why Join Us?
Innovative Environment: Be part of a forward-thinking team dedicated to pushing the boundaries of art and technology.
Career Growth: Opportunities for professional development and career advancement.
Creative Freedom: Work in a role that values creativity and encourages new ideas.
Company Culture: Enjoy a dynamic, inclusive, and supportive work environment.
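For readers sizing up the ELT requirements above, a minimal sketch of the extract-load-transform pattern the role centers on, in plain Python. The API endpoint, table names, and the SQLite stand-in for a cloud warehouse are illustrative assumptions, not Asign's actual stack; in production an orchestrator such as Dagster, Airflow, or Prefect would schedule and monitor steps like these.

```python
# Minimal ELT sketch: extract raw data, load it untransformed into a
# staging table, then transform inside the warehouse with SQL.
# All endpoint and table names are hypothetical placeholders.
import json
import sqlite3  # stand-in for Snowflake/BigQuery/Redshift
import requests

def extract(url: str) -> list[dict]:
    # E: pull raw records from a source API
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

def load(conn: sqlite3.Connection, records: list[dict]) -> None:
    # L: land records as-is in a raw/staging table
    conn.execute("CREATE TABLE IF NOT EXISTS raw_artworks (payload TEXT)")
    conn.executemany(
        "INSERT INTO raw_artworks (payload) VALUES (?)",
        [(json.dumps(r),) for r in records],
    )

def transform(conn: sqlite3.Connection) -> None:
    # T: shape the raw data into an analysis-ready table using SQL
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS artworks AS
        SELECT json_extract(payload, '$.id')     AS artwork_id,
               json_extract(payload, '$.artist') AS artist,
               json_extract(payload, '$.price')  AS price
        FROM raw_artworks;
    """)

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")
    load(conn, extract("https://api.example.com/artworks"))  # hypothetical endpoint
    transform(conn)
    conn.commit()
```

The key design point of ELT (versus ETL) is that raw data lands in the warehouse before any shaping, so transformations can be re-run in SQL without re-extracting from the source.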
Posted 1 week ago
4.0 years
0 Lacs
Andhra Pradesh, India
On-site
A career in our Advisory Acceleration Centre is the natural extension of PwC's leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability for client engagements.
Years of Experience: Candidates with 4+ years of hands-on experience
Position Requirements
Must Have:
Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
Strong expertise in end-to-end implementation of cloud data engineering solutions such as enterprise data lakes and data hubs in AWS
Proficiency in Lambda and Kappa architectures
Awareness of data management concepts and data modelling
Strong AWS hands-on expertise with a programming background, preferably Python/Scala
Good knowledge of Big Data frameworks and related technologies; experience in Hadoop and Spark is mandatory
Strong experience with AWS compute services such as EMR and Glue, and storage services such as S3, Redshift, and DynamoDB
Good experience with at least one of the AWS streaming services: AWS Kinesis, AWS SQS, or AWS MSK
Troubleshooting and performance-tuning experience in the Spark framework: Spark Core, Spark SQL, and Spark Streaming
Strong understanding of the DBT ELT tool and usage of DBT macros, etc.
Good knowledge of application DevOps tools (Git, CI/CD frameworks); experience in Jenkins or GitLab, with rich experience in source code management tools such as CodePipeline, CodeBuild, and CodeCommit
Experience with AWS CloudWatch, AWS CloudTrail, AWS Account Config, and AWS Config Rules
Good knowledge of AWS security and AWS key management
Strong understanding of cloud data migration processes, methods, and the project lifecycle
Good analytical and problem-solving skills
Good communication and presentation skills
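As an illustration of the Glue-and-Spark skill set listed above, a hedged sketch of an AWS Glue PySpark job that reads raw data from S3, filters it, and writes partitioned Parquet to a curated zone. The bucket paths and column names are hypothetical, not a PwC engagement standard.

```python
# Sketch of an AWS Glue PySpark job: raw zone -> filter -> curated zone.
# Paths, columns, and the filter predicate are illustrative only.
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
print(f"Starting {args['JOB_NAME']}")

# Read raw CSV landed in the data lake's raw zone (hypothetical path)
raw = spark.read.option("header", "true").csv("s3://example-lake/raw/trades/")

# Typical performance-tuning lever: repartition on the write key to
# avoid skew and small files downstream
curated = (
    raw.filter("trade_status = 'SETTLED'")
       .repartition("trade_date")
)

# Write partitioned Parquet for querying via Athena or Redshift Spectrum
(curated.write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("s3://example-lake/curated/trades/"))
```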
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
eInfochips (An Arrow Company): eInfochips, an Arrow company (a $27.9B, NASDAQ-listed (ARW) company ranked #133 on the Fortune list), is a leading global provider of product engineering and semiconductor design services. With a 25+ year proven track record and a team of over 2,500 engineers, it has been instrumental in developing over 500 products with 40M deployments in 140 countries. The company's service offerings include Silicon Engineering, Embedded Engineering, Hardware Engineering, and Digital Engineering services. eInfochips serves 7 of the top 10 semiconductor companies and is recognized by NASSCOM, Zinnov, and Gartner as a leading semiconductor service provider.
What You Will Be Doing:
Extensive work experience with the following services: EC2, S3, VPC, IAM, Lambda, Organizations, RDS, AWS Backup, WorkSpaces, AWS Systems Manager, AWS Secrets Manager, and CloudWatch
Working knowledge of the following services: API Gateway, ElastiCache, OpenSearch, Cognito, EMR, ECR, SES, SNS, Glue, Kinesis, Redshift, and DynamoDB
Ability to migrate containerized services to AWS ECS and AWS EKS
Previous experience deploying pipelines using AWS CodePipeline
Deploying applications in AWS using EC2 instances and Elastic Beanstalk
Ability to write CloudFormation templates depending on requirements
Creating new IAM roles and policies as per client requirements
Recommending best practices in AWS by providing security and cost reports
Strong knowledge of the following AWS security services: Config, CloudTrail, Security Hub, GuardDuty, and Inspector
Good skills in Bash scripting and the AWS CLI
Knowledge of Terraform is an added advantage
Preferably AWS Certified Solutions Architect – Professional
What We Are Looking For:
At least 5+ years of AWS Cloud experience.
5+ years of experience working with business, applications, and infrastructure teams to design and implement IaaS, PaaS, and SaaS cloud and hybrid cloud solutions.
Monitoring, troubleshooting, and resolving infrastructure issues in AWS Cloud; performing infrastructure procedures such as managing network connectivity, creating scripts to provision resources and manage accounts, creating/maintaining AMIs automatically, and documenting operational procedures.
Working closely with Development, Architecture, Security, and other Operations teams to complete assignments.
Working across multiple phases of the software development lifecycle (SDLC), including build, test, and operations, using an Agile methodology and tools.
Implementing projects to migrate workloads from on-premises to AWS Cloud, and provisioning new accounts and services on AWS Cloud.
Planning, engineering, and designing the infrastructure; developing and configuring it from conceptualization through stabilization across various computer platforms, including test, debug, performance analysis, and documentation of infrastructure components.
Implementing the infrastructure by analyzing the current system environment and infrastructure, using technical tools and utilities, performing complex product customization, and developing implementation and verification procedures to ensure successful installation of systems hardware/software.
Implementing test plans (including functionality, integrity, and stress testing; test matrices; and definition of test methodologies and strategy).
Performing routine infrastructure analysis and evaluation of the resource requirements necessary to maintain and/or expand service levels.
Strong customer service and people skills.
Why Join Us? Opportunity to work on cutting-edge technologies.
Lead a high-performing team in a fast-paced, dynamic environment.
Location: Ahmedabad
Interested candidates can share their resume at arti.bhimani1@einfochips.com
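A hedged example of the kind of operational automation this role describes, using Python with boto3: creating a dated AMI for every EC2 instance carrying a backup tag. The tag key, region, and naming scheme are assumptions for illustration, not a client standard.

```python
# Illustrative boto3 sketch: automated AMI creation for tagged instances.
# Tag key/value, region, and naming convention are hypothetical.
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

def backup_tagged_instances(tag_key: str = "Backup", tag_value: str = "true"):
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    reservations = ec2.describe_instances(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            instance_id = inst["InstanceId"]
            # NoReboot avoids downtime at the cost of crash-consistency
            image = ec2.create_image(
                InstanceId=instance_id,
                Name=f"{instance_id}-{stamp}",
                NoReboot=True,
            )
            print(f"Created AMI {image['ImageId']} for {instance_id}")

if __name__ == "__main__":
    backup_tagged_instances()
```

In practice a script like this would run on a schedule (e.g., via Lambda and EventBridge) and pair with a retention policy that deregisters old AMIs.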
Posted 1 week ago
0 years
7 - 8 Lacs
India
On-site
Job Summary: We are looking for a skilled and motivated Data Engineer to join our growing data team in Pune. The ideal candidate will be responsible for building, maintaining, and optimizing the data pipelines and architectures that support business intelligence, analytics, and data science initiatives.
Key Responsibilities:
Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data.
Build and optimize data architectures using tools like Spark, Hadoop, and Kafka.
Work with relational and NoSQL databases such as SQL Server, PostgreSQL, MongoDB, and Cassandra.
Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements.
Implement data quality, security, and governance best practices.
Automate and monitor data workflows using tools like Apache Airflow, Luigi, etc.
Work on cloud-based data solutions using AWS, Azure, or Google Cloud Platform (GCP).
Optimize data delivery for scalability and performance.
Required Skills:
Strong programming skills in Python and SQL (Scala or Java is a plus).
Experience with big data tools like Hadoop, Spark, Hive, and Kafka.
Hands-on experience with ETL tools and data warehouse concepts.
Experience with cloud services (AWS Redshift, Azure Data Factory, GCP BigQuery, etc.).
Understanding of data modeling, data lakes, and data APIs.
Good problem-solving and debugging skills.
Strong communication and teamwork abilities.
Job Type: Full-time
Pay: ₹65,000.00 - ₹70,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus
Work Location: In person
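To make the Airflow expectation concrete, a minimal sketch of a daily ETL DAG with extract, transform, and load tasks. The DAG name and task bodies are placeholders; real tasks would call Spark jobs, Kafka consumers, or warehouse loaders.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> transform -> load
# workflow. All names and task bodies are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull increment from source DB / Kafka topic")

def transform(**_):
    print("clean, dedupe, and conform types with Spark or pandas")

def load(**_):
    print("upsert into the warehouse (e.g., Redshift/BigQuery)")

with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```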
Posted 1 week ago
3.0 years
5 - 8 Lacs
Bengaluru
On-site
Basic qualifications:
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
Come build the future of smart security with us. Are you interested in helping shape the future of devices and services designed to keep people close to what's important?
About Ring
Since its founding in 2013, Ring has been on a mission to make neighborhoods safer. From the first-ever video doorbell to the award-winning DIY Ring Alarm system, Ring's smart home security product line, as well as the Neighbors app, offers users affordable whole-home and neighborhood security. At Ring, we are committed to making home and neighborhood security accessible and effective for everyone -- while working hard to bring communities together. Ring is an Amazon company. For more information, visit www.ring.com. With Ring, you're always home.
Ring is looking for an insightful and analytical Business Intelligence Engineer with strong business and technical skills to join our Business Intelligence team. In this role, you will partner with product management, engineering, quality assurance, and the other BI teams that power Ring. Your work will be instrumental to achieving Ring's mission, will be highly visible to Ring/Amazon leadership, and will drive key strategic company goals.
The Business Intelligence Engineer on the Ring Decision Sciences Platform BI team will develop models and tools, conduct statistical analyses, evaluate large data sets, and create tailored models and dashboards. Additionally, you will be instrumental in the creation of a reliable and scalable infrastructure for ongoing reporting and analytics. You will structure ambiguous problems and design analytics across various disciplines, resulting in actionable recommendations ranging from strategic planning, product strategy/launches, and engineering improvements to marketing campaign optimization, customer-service trending, and competitive research.
Key job responsibilities
* Enable decision-making by retrieving and aggregating data from multiple sources to present it in a digestible and actionable format
* Work with the iOS and Android development and product teams to identify gaps and trends
* Analyze large data sets using a variety of database query and visualization tools
* Provide technical expertise in extracting, integrating, and analyzing critical data
* Anticipate, identify, structure, and solve critical problems
* Design and develop key performance metrics and indicators using standardized and custom reports
* Perform ad hoc analysis to quickly solve time-sensitive operational issues and business cases
* Clearly communicate any potential data discrepancies and/or reporting downtime, including the specific root cause, steps to resolution, and resolution date, to a large end-user base
* Partner with subject matter experts to document and translate business requirements into technical requirements
* Manage multiple projects and proactively communicate issues, priorities, and objectives
* Partner with BI architects to provide valuable inputs for remodeling the existing data warehouse
A day in the life
As you lead the Business Intelligence Engineering (BIE) efforts for an upcoming device launch, your day involves collaborating closely with Product, Program, Firmware, and various other engineering teams. Having already identified the key signals for analyzing device performance based on feature sets, you spend time aligning on proper definitions for analyzing these features, meeting with engineering teams to instrument appropriate signals, maintaining data pipelines, and refining comprehensive dashboards. Throughout the day, you monitor platform performance while advancing initiatives to improve Ring's analytics through AI workflow implementation.
About the team
The Ring Decision Sciences Platform is responsible for the data strategy, architecture, governance, science, and software services Ring teams use to inform business strategy or power experiences with data. The central Data Science and Analytics team (within Decision Sciences, and the team where this role is based) is responsible for core business metrics, shared data models, AI/ML models, business intelligence dashboards, and business analysis/science support.
Preferred qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
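A short sketch of the SQL-plus-Python workflow the qualifications call for: pulling aggregated event counts from Redshift and flagging anomalous days in pandas. The cluster endpoint, credentials, and device_events table are hypothetical assumptions, not Ring's actual schema.

```python
# Hedged sketch: pull 30 days of aggregated events from Redshift with
# SQL, then flag anomalous days in pandas. All names are illustrative.
import redshift_connector  # AWS's Python driver for Redshift

QUERY = """
    SELECT device_type,
           DATE_TRUNC('day', event_ts) AS event_day,
           COUNT(*) AS events
    FROM device_events
    WHERE event_ts >= DATEADD(day, -30, GETDATE())
    GROUP BY 1, 2
"""

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="bi_user",
    password="***",  # use Secrets Manager / IAM auth in practice
)
cursor = conn.cursor()
cursor.execute(QUERY)
df = cursor.fetch_dataframe()  # returns a pandas DataFrame

# Pivot into a device-by-day matrix and flag days deviating > 3 sigma
pivot = df.pivot(index="event_day", columns="device_type", values="events")
zscores = (pivot - pivot.mean()) / pivot.std()
anomalies = zscores[zscores.abs() > 3].dropna(how="all")
print(anomalies)
```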
Posted 2 weeks ago
5.0 - 7.0 years
5 - 9 Lacs
Bengaluru
On-site
Job Title: Data Scientist I/II
Job Description
At Philips, data is at the center of everything we do. We believe that every data item has a story of its own. If you have a penchant for unravelling hidden patterns, then you are the person we are looking for! To build our next-generation smarts, you will work with experts in a variety of fields, including clinical & regulatory specialists, UX designers, dental and skin care professionals, system architects, and product management, to integrate new technologies and refine system performance. You will help build learning systems, leveraging data sets of user interactions and actions collected from hundreds of thousands of consumer devices per day, to model, analyse, and predict user behaviours. While some of our algorithms run on resource-constrained devices, others require large clusters on our cloud infrastructure. As a data scientist you will investigate, discern, and interpret data sets to recognize user behaviour and predict trends.
Job Responsibilities
· Use predictive modeling to increase and optimize customer experiences, revenue generation, campaign optimization, and other business outcomes
· Work with product management to develop data use cases and embed predictive models in workflows, on both resource-constrained platforms and the cloud
· Select features and build and optimize classifiers using machine learning and deep learning techniques
· Collaborate with data engineers to enhance data collection and ingestion/curation techniques to include information relevant to building analytic systems
· Process, cleanse, and verify the integrity of data used for analysis
· Develop processes and tools to monitor and analyze model performance and data accuracy, including life-cycle management of predictive models
· Adhere to compliance procedures in accordance with regulatory standards, requirements, and policies; manage and design the reporting environment, including data sources, security, and metadata
Job Qualifications:
Master's degree or PhD in Computer Science, Information Management, Statistics, or a related field, with 5 to 7 years of experience in the consumer or healthcare industry manipulating data sets and building predictive models, with a focus on product development
Experience in statistical modelling, machine learning, data mining, unstructured data analytics, and natural language processing
Sound understanding of Bayesian modelling, classification models, cluster analysis, neural networks, nonparametric methods, multivariate statistics, etc.
Strong hands-on knowledge of ML techniques such as regression algorithms, K-NN, Naïve Bayes, and SVM, and ensemble techniques such as Random Forest and AdaBoost
Strong knowledge of unsupervised learning algorithms using neural networks and deep learning
Strong knowledge of data wrangling and exploration techniques to identify patterns, trends, and outliers
Deep knowledge and practical experience with data science toolkits, such as NumPy, Pandas, scikit-learn, or equivalent
Experience with data visualization tools, such as QlikView, Matplotlib, seaborn, or equivalent
Proficiency in query languages such as SQL and PL/SQL
Hands-on experience with one or more databases/platforms such as Hadoop, AWS Redshift, or Snowflake
Good applied statistics skills, such as distributions, statistical testing, and regression
Good ETL scripting and programming skills, such as Python, R, or Scala, to integrate the developed solution into the proposition
A team player capable of working and integrating across cross-functional teams to implement project requirements
Experience in technical requirements gathering and documentation
Ability to work effectively and independently in a fast-paced, global, collaborative agile team environment with tight deadlines
A flexible, pragmatic, and collaborative team player with an innate ability to engage with stakeholders at all levels of the organization
A self-starter with high levels of drive, energy, resilience, and a desire for professional excellence, with a passion for data and data science
How we work together
We believe that we are better together than apart. For our office-based teams, this means working in person at least 3 days per week. This role is an office-based role.
About Philips
We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help improve the lives of others.
Learn more about our business. Discover our rich and exciting history. Learn more about our purpose.
If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here.
#LI-EU #LI-Hybrid #LI-PHILIN
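As a concrete instance of the classifier-building loop described above, a minimal scikit-learn sketch: an ensemble classifier (Random Forest, one of the techniques the posting names) evaluated with cross-validated ROC AUC. The synthetic dataset is a stand-in for real device-interaction features.

```python
# Minimal scikit-learn sketch: build an ensemble classifier and
# evaluate it with cross-validation. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for user-interaction features from consumer devices
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=42)

# Random Forest: bagged decision trees, robust to unscaled features
model = RandomForestClassifier(n_estimators=200, random_state=42)

# Cross-validated ROC AUC as a simple model-monitoring metric
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The same cross-validation loop doubles as the starting point for the life-cycle monitoring the responsibilities mention: re-running it on fresh data surfaces performance drift before a model degrades in production.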
Posted 2 weeks ago
0.0 - 3.0 years
7 - 10 Lacs
Bengaluru
On-site
JLL empowers you to shape a brighter way. Our people at JLL and JLL Technologies are shaping the future of real estate for a better world by combining world-class services, advisory, and technology for our clients. We are committed to hiring the best, most talented people and empowering them to thrive, grow meaningful careers, and find a place where they belong. Whether you've got deep experience in commercial real estate, skilled trades, or technology, or you're looking to apply your relevant experience to a new industry, join our team as we help shape a brighter way forward.
Data Engineer
Find your next move at JLL and build a fulfilling career. At JLL, we value what makes you unique, and we're committed to giving you the opportunity, knowledge, and tools to own your success. Explore opportunities to advance your career from within, whether you're looking to move up, broaden your experience, or deepen your expertise.
The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL's data strategy. We are seeking data professionals to work with our colleagues at JLL around the globe in providing solutions, developing new products, and building enterprise reporting & analytics capability to reshape the business of commercial real estate using the power of data, and we are just getting started on that journey!
We are looking for a Data Engineer who is a self-starter, able to work in a diverse and fast-paced environment, to join our Enterprise Data team. This role is responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns. It is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels by utilizing in-depth knowledge of data, infrastructure, technologies, and data engineering experience.
Sound like you? To apply you need to have:
Requirements:
Bachelor's degree in Computer Science, Data Engineering, or a related field (a master's degree is a plus)
0-3 years of experience in data engineering or full-stack development, with a focus on cloud-based environments
Strong expertise in SQL and PySpark, with a proven track record of working on large-scale data projects
Experience with at least one cloud platform such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)
Experience working with databases, especially SQL Server databases
Experience handling unstructured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues
Proficiency in designing and implementing data pipelines, ETL processes, and workflow automation
Familiarity with data warehousing concepts, dimensional modelling, and data governance best practices
Strong problem-solving skills and the ability to analyze complex data processing issues
Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams
Attention to detail and a commitment to delivering high-quality, reliable data solutions
Ability to adapt to evolving technologies and work effectively in a fast-paced, dynamic environment
Preferred Qualifications:
Experience with big data technologies (e.g., Spark, Python, serverless stacks, APIs, etc.)
Familiarity with cloud-based data warehousing platforms (e.g., AWS Redshift, Google BigQuery, Snowflake, etc.)
Knowledge of data visualization tools (e.g., Tableau, Power BI) for creating meaningful data reports and dashboards is a plus
What you can expect from us: You'll join an entrepreneurial, inclusive culture, one where we succeed together, across the desk and around the globe, and where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits, and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you. Apply today!
Location: On-site - Bengaluru, KA
Scheduled Weekly Hours: 40
If this job description resonates with you, we encourage you to apply even if you don't meet all of the requirements. We're interested in getting to know you and what you bring to the table!
JLL Privacy Notice
Jones Lang LaSalle (JLL), together with its subsidiaries and affiliates, is a leading global provider of real estate and investment management services. We take our responsibility to protect the personal information provided to us seriously. Generally, the personal information we collect from you is for the purposes of processing in connection with JLL's recruitment process. We endeavour to keep your personal information secure with an appropriate level of security and keep it for as long as we need it for legitimate business or legal reasons. We will then delete it safely and securely. For additional details please see our career site pages for each country.
Jones Lang LaSalle ("JLL") is an Equal Opportunity Employer and is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, including the online application and/or overall selection process, you may email us at accomodationrequest@am.jll.com. This email is only to request an accommodation. Please direct any other general recruiting inquiries to our Contact Us page > I want to work for JLL.
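To ground the event-driven pipeline requirement above, a hedged PySpark Structured Streaming sketch that parses JSON events from a Kafka topic and lands them as Parquet in a data lake. The topic, schema, and S3 paths are illustrative assumptions; the Kafka source also requires the spark-sql-kafka connector package on the classpath.

```python
# Sketch of event-driven ingestion with Spark Structured Streaming:
# Kafka JSON events -> parsed columns -> Parquet in the lake.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("property-events-ingest").getOrCreate()

event_schema = StructType([
    StructField("property_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
         .option("subscribe", "property-events")            # placeholder topic
         .load()
         .select(from_json(col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Land parsed events as Parquet, checkpointed so the stream can resume
query = (
    events.writeStream
          .format("parquet")
          .option("path", "s3a://example-lake/events/")
          .option("checkpointLocation", "s3a://example-lake/_checkpoints/events/")
          .start()
)
query.awaitTermination()
```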
Posted 2 weeks ago