
2002 DynamoDB Jobs - Page 29

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Location: Ahmedabad
Experience: 4 to 7 years

Job Description
At Brilworks, we are passionate about delivering innovative software solutions. We are looking for an experienced Lead MERN Stack Developer who not only has strong technical expertise but also excels in managing and mentoring a team of developers. If you are driven by challenges and enjoy fostering a collaborative, growth-oriented team environment, we want to hear from you!

Role & Responsibilities

Team Leadership & Management:
- Lead and mentor a team of 5 to 7 developers, ensuring they meet project objectives and personal growth goals.
- Provide technical guidance, conduct code reviews, and ensure adherence to best practices.
- Facilitate effective communication within the team and with stakeholders, including clients.
- Oversee sprint planning, task delegation, and progress tracking in an Agile environment.
- Promote a culture of accountability, ownership, and continuous learning within the team.

Project Delivery:
- Translate client requirements and Jira tickets into actionable tasks and deliverables.
- Collaborate with clients and stakeholders to provide updates, clarify requirements, and ensure alignment with goals.
- Ensure the team delivers high-quality, scalable, and maintainable code.

Technical Responsibilities:
- Architect and develop robust, scalable applications using the MERN stack (MongoDB, Express, React, Node.js).
- Ensure responsive, pixel-perfect UI implementation from Figma designs.
- Manage state effectively in React applications (e.g., Redux, React Query).
- Build and maintain RESTful APIs and, optionally, work with GraphQL APIs.
- Implement and enforce automated testing practices, including unit testing and end-to-end testing (e.g., Cypress).
- Establish CI/CD pipelines for efficient deployment and testing processes.
- Optimize applications for performance, scalability, and security.

Must-Have:
- 5+ years of hands-on experience in MERN stack development.
- Proficiency in React.js, including state management and component design.
- Strong knowledge of Node.js and Express.js for backend development.
- Experience with REST API development and integration.
- Ability to convert Figma designs into responsive React components.
- Expertise in writing unit tests to ensure code quality.
- Solid understanding of Agile development methodologies.

Good-to-Have:
- Experience with GraphQL and building GraphQL APIs.
- Knowledge of the Next.js and Nest.js frameworks.
- Familiarity with AWS services such as S3, Cognito, and DynamoDB.
- Experience with CI/CD pipeline setup and management.
- Knowledge of Storybook for UI development and testing.
- Proficiency in Cypress for end-to-end testing.

Soft Skills:
- Strong leadership and team management abilities.
- Excellent problem-solving skills and the ability to make critical decisions under pressure.
- Clear and concise communication, both written and verbal.
- Ability to manage multiple priorities and meet deadlines in a fast-paced environment.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Kakori, Uttar Pradesh, India

Remote

Job Title: Full Stack Developer (Immediate Joiner Preferred)
Experience: 6+ years
Location: Bangalore, Lucknow, or Remote
Availability: Immediate joiners preferred

About Us
UsefulBI is a data engineering and analytics solutions company delivering scalable cloud-native platforms and intelligent insights. We are looking for an experienced Full Stack Developer to join our growing team and build high-performance, cloud-native applications.

Job Summary
We are seeking a skilled and proactive Full Stack Developer with 6+ years of experience in both backend and frontend development. The ideal candidate will have strong expertise in AWS Lambda, DynamoDB, Terraform, PostgreSQL, Python, and Angular. You will be responsible for building, deploying, and maintaining scalable, secure web applications.

Key Responsibilities
- Design, develop, and maintain scalable full stack applications using Python and Angular.
- Develop and deploy AWS serverless components, especially Lambda functions.
- Build and manage infrastructure using Terraform (IaC).
- Design efficient data models and write optimized queries for DynamoDB and PostgreSQL.
- Collaborate with cross-functional teams to define and deliver high-quality solutions.
- Ensure responsive design and cross-browser compatibility for frontend applications.
- Implement unit tests and participate in code reviews for continuous improvement.
- Troubleshoot, debug, and upgrade existing software.

Required Skills & Experience
- 6+ years of full stack development experience.
- Strong expertise in Python for backend development.
- Proficiency in Angular (version 8+) for frontend development.
- Hands-on experience with AWS Lambda and DynamoDB.
- Experience using Terraform for infrastructure as code.
- Strong understanding of relational (PostgreSQL) and NoSQL (DynamoDB) databases.
- Experience in RESTful API development and integration.
- Familiarity with CI/CD pipelines and Git-based workflows.
- Excellent problem-solving and communication skills.

Preferred Qualifications
- AWS certifications
- Experience with Agile/Scrum methodologies
- Exposure to microservices and event-driven architectures

Why Join Us?
- Work on cutting-edge cloud-native data platforms
- Opportunity to make an immediate impact
- Flexible location options (Bangalore, Lucknow, or Hybrid)
- A collaborative and supportive team environment

(ref:hirist.tech)
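For illustration, here is a minimal sketch of the Lambda-plus-DynamoDB work this posting describes: a Python handler that persists a record with boto3. The table name, environment variable, and event shape are assumptions for the example, not details from the posting.

    import json
    import os

    import boto3

    # Table handle resolved once per container; TABLE_NAME is an assumed env var.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ.get("TABLE_NAME", "orders"))

    def handler(event, context):
        # An API Gateway proxy integration delivers the payload as a JSON string body.
        body = json.loads(event.get("body") or "{}")
        item = {
            "order_id": body["order_id"],          # partition key (assumed schema)
            "status": body.get("status", "NEW"),
        }
        table.put_item(Item=item)
        return {"statusCode": 201, "body": json.dumps({"saved": item["order_id"]})}

In a Terraform-managed setup of the kind the posting mentions, the function, the table, and the TABLE_NAME variable would all be declared as resources, keeping the handler free of hard-coded names.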

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Key Responsibilities
- Accountable for the development and support of high-quality software products used by Pharmaceutical Development and Global Supplies within GPS.
- Collaborate with Business and IT Plan functions to develop solutions for business problems, including defining business requirements, providing project timelines and budgets, and developing acceptance criteria, testing, training, and change management plans.
- Contribute heavily to developing a plan for development activities and translate it into actionable projects, including releases, required modifications, and discretionary enhancements to support the application life cycle.
- Collaborate directly with business clients, IT Business Partners, and other IT functions on delivery of digital capabilities.
- Take complete ownership of releases from design through deployment and a successful production run, including coordination with the Ops team to deploy applications to various environments.

Qualifications & Experience
- A strong commitment to a career in technology with a passion for healthcare.
- Strong communication skills, the ability to understand the needs of the business, and a commitment to delivering the best user experience and adoption.
- 3+ years of software development experience across the full SDLC: analysis, design, development, testing, and production.
- Proven experience as a full stack developer or in a similar role.
- Ability to learn, design, and implement new technologies.
- Experience leading and mentoring small teams of highly skilled technical developers.
- Experience designing and implementing business-critical applications within the AWS ecosystem.
- Experience with cloud platforms such as AWS, Azure, and GCP.
- Experience as a Python/Node.js developer.
- Strong knowledge of front-end technologies such as HTML, CSS, JavaScript, and React.js.
- Strong knowledge of relational database technologies such as MySQL, SQL Server, Oracle, and PostgreSQL.
- Strong knowledge of NoSQL databases such as MongoDB, Amazon DynamoDB, and Cassandra.
- Knowledge of source code repositories like SVN, GitHub, and Bitbucket.
- Knowledge of design and implementation of N-tier applications in both cloud and on-prem environments.

Ideal candidates would also have
- May lead initiatives related to continuous improvement or implementation of new technologies.
- Works independently on most deliverables.
- Participates in decision making and brings a variety of strong views and perspectives to achieve team objectives.
- Knowledge of the Software Development Lifecycle (SDLC) and computer systems validation (CSV).
- Ability to quickly learn new technologies and incorporate them into a solution.
- Project management skills, and experience with Agile and Scrum methodologies.
- Ability to collaborate across multiple functional teams.

#HYDIT

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join us as an "AWS Cloud Engineer" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. You may be assessed on the key critical skills relevant for success in the role, such as risk and control, change and transformations, business acumen, strategic thinking, and digital technology, as well as job-specific skillsets.

To be successful as an "AWS Cloud Engineer", you should have experience with:

Basic/Essential Qualifications
- Experience with AWS Cloud technology for data processing.
- Good understanding of AWS architecture and features, including:
  - Compute services such as EC2, Lambda, Auto Scaling, and VPC
  - Storage and container services such as ECS, S3, DynamoDB, and RDS
  - Management and governance services such as KMS, IAM, CloudFormation, CloudWatch, and CloudTrail
  - Analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift
- Delivery knowledge for data processing components in larger end-to-end projects.

Desirable Skillsets/Good to Have
- Strong AWS solution and implementation knowledge.
- Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.
- Broad and solid understanding of the concepts and roles behind data processing application build, delivery, and design.
- Experience with planning, estimating, organising, and working on multiple projects.
- Experience in data mapping and technical flow design.

This role will be based out of Pune.

Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities
- Build and maintain data architectures and pipelines that enable the transfer and processing of durable, complete and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
- Make evaluative judgements based on the analysis of factual information, paying attention to detail.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
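As a rough illustration of the analytics services named above, the sketch below starts an Athena query with boto3 and polls for completion. The database, query, and S3 output location are assumptions for the example.

    import time

    import boto3

    athena = boto3.client("athena")

    def run_query(sql: str, database: str, output_s3: str) -> list:
        """Start an Athena query, wait for it, and return the raw result rows."""
        qid = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": database},
            ResultConfiguration={"OutputLocation": output_s3},
        )["QueryExecutionId"]
        while True:  # production code would add a timeout and backoff
            status = athena.get_query_execution(QueryExecutionId=qid)
            state = status["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)
        if state != "SUCCEEDED":
            raise RuntimeError(f"Athena query finished in state {state}")
        return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

    # Assumed names throughout:
    # run_query("SELECT * FROM events LIMIT 10", "analytics_db", "s3://my-athena-results/")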

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Description
Want to join the Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Do you like to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team.

GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization, and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
- Maintain and refine straightforward ETL; write secure, stable, testable, maintainable code with minimal defects; and automate manual processes.
- Use one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, Power BI) and, as needed, statistical methods (e.g., t-test, Chi-squared) to deliver actionable insights to stakeholders.
- Build and own small to mid-size BI solutions with high accuracy and on-time delivery, using data sets, queries, reports, dashboards, analyses, or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals, and analysis principles.
- Maintain a good understanding of the relevant data lineage: the sources of data, how metrics are aggregated, and how the resulting business intelligence is consumed, interpreted, and acted upon by the business, so that the end product enables effective, data-driven business decisions.
- Take responsibility for the code, queries, reports, and analyses that are inherited or produced, and have analyses and code reviewed periodically.
- Partner effectively with peer BIEs and others in your team to troubleshoot, research root causes, and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner.

About The Team
The Global Operations – Artificial Intelligence (GO-AI) team is an initiative that remotely handles exceptions in Amazon robotic fulfillment centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. The team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It operates multiple programs and other new initiatives in partnership with global technology and operations teams.

Basic Qualifications
- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Preferred Qualifications
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 16 SEZ
Job ID: A3009412
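To make the statistical-methods bullet concrete, here is a small hedged example of a two-sample t-test of the kind mentioned (Welch's variant), run on simulated data rather than anything from the posting:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Simulated handle times (seconds) for two workflows; purely illustrative.
    workflow_a = rng.normal(loc=30.0, scale=5.0, size=200)
    workflow_b = rng.normal(loc=28.5, scale=5.0, size=200)

    t_stat, p_value = stats.ttest_ind(workflow_a, workflow_b, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Mean handle times differ at the 5% significance level.")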

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities
AWS Cloud Software Engineer

Reporting Relationship: This role will report to a Delivery Manager/Senior Delivery Manager
Experience Level: 2-4 years

Role Overview
Function as an AWS Cloud Software Engineer from a technical and support perspective, with a focus on day-to-day operations and service excellence, including Incident and Problem Management (SEV/severity management, SLA and support metrics) and technical product enhancements to improve overall operational efficiency. Requires strong AWS Cloud skills and Python development experience, cloud migration exposure, and a good understanding of software engineering concepts and CI/CD pipeline tools and integrations.

Key Responsibilities
- Experience in Kanban and Scrum methodologies is preferred.
- Responsible for day-to-day operations within Technology Operations, including SEV management, on-call support, SLA compliance, and incident reduction.
- Good understanding of software engineering frameworks and standard procedures.
- Document key processes and applications, take part in knowledge transitions, and keep focus on continuous process improvement.
- Assist the team with process enhancements, building dashboards, and system health improvement initiatives.
- Work with diverse teams and provide assistance with issue resolution, deep-dive analysis, and troubleshooting.
- Strong knowledge of AWS Cloud skills and Python development.

Service Excellence
- As part of on-call support, report and handle SEV1 incidents and Problem Management with key stakeholders and partners for key apps.
- Work with leads on ways to improve support protocols aimed at issue resolution, and maintain SLA compliance.
- Provide L1 and L2 support.
- Ensure timely communication and alerts to key stakeholders regarding SEVs, application downtimes, and ETAs.

Security & Environment Management
- Support and remediate critical and high security vulnerabilities; ensure code deployments and pipeline migrations follow Engineering standards.
- Deliver scheduled AVA (Application Vulnerability) tests, DR (Disaster Recovery) tests, and ADO-to-GitHub migrations.

Operational Efficiency
- Prepare documentation for key apps and processes, including a centralized knowledge base for troubleshooting, processes, and solutions.
- Get actively involved in training, self-development, and knowledge sharing.
- Good verbal and written communication skills.
- Strong interpersonal skills, including mentoring, coaching, collaborating, and team building.

Qualifications
Education: Graduate - Bachelor's degree (any stream)

Additional Information
Skill set:
- Strong knowledge of AWS and Python development. Exposure to Java, Informatica, MuleSoft, and .NET is good to have.
- Good Python knowledge (including frameworks like Addo or Django) with an understanding of data types, error handling, and best coding practices.
- Databases: SQL (Amazon Redshift, Amazon RDS), NoSQL (Amazon DynamoDB), Postgres, JSON, file.
- Good experience working with Amazon S3, AWS Glue, AWS Lambda, CDK, IAM, and Amazon Athena.
- Exposure to tools like JIRA/Confluence/ServiceNow.
- Well versed in DevOps practices (CI, CD, etc.). Knowledge of Azure DevOps and shell scripting is a plus.
- Incident Management, Problem Management, Root Cause Analysis, and continuous improvement initiatives to drive measurable outcomes.

Competencies
- Strong engineering mindset.
- Ability and willingness to work across various Cloud Architecture / SysOps / DevOps initiatives.
- Ability to deal with ambiguity.
- Ability and willingness to work in a fast-paced, agile environment.
- High-level understanding of systems and processes supported by the system.
- Sound technical skills, high aptitude, positive attitude, strong interpersonal skills, and excellent communication and time management skills.
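As a sketch of the monitoring and alerting side of this role, the snippet below creates a CloudWatch alarm on Lambda errors that notifies an SNS topic for on-call. The function name, threshold, and topic ARN are assumptions for the example.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="orders-lambda-errors",
        Namespace="AWS/Lambda",
        MetricName="Errors",
        Dimensions=[{"Name": "FunctionName", "Value": "orders-handler"}],  # assumed function
        Statistic="Sum",
        Period=300,                    # evaluate in 5-minute windows
        EvaluationPeriods=1,
        Threshold=5,                   # alarm at 5+ errors per window (assumed)
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-alerts"],  # assumed topic
    )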

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities
- Good understanding of business processes and needs.
- Develop and manage AWS development and migration programs.
- Build data integration scripts using Python to integrate with upstream/downstream systems (within the AWS Cloud Services ecosystem).
- Develop application code for programs while following coding standards.
- Resolve queries/issues and provide application/functional support.
- Build a holistic understanding of the applications and underlying domain being supported.
- Recommend changes to improve established data integration processes.
- Provide solutions or alternatives to avoid potential problems.
- Good communication skills; ability to persuade and clearly identify audiences; uses good judgment about when and how to communicate.
- Understand and adhere to development methodology, standards, and principles.
- Identify potential process improvements.
- Begin to establish business and customer expectations and relationships.
- Create technical documentation around business requirements with clarity, completeness, and specificity.

Requirements
- 2-4 years' experience; Python preferred, along with a statically typed language like Java or C#.
- 1-2 years' experience migrating applications to AWS; experience with serverless technologies preferred.
- Understanding of and familiarity with common software development patterns, best practices, and anti-patterns.
- Experience with DevOps practices (TDD, CI, CD, etc.).
- Exposure to working with and integrating multiple vendor software applications.
- Experience with databases (we use SQL Server).
- Development expertise in a broad range of cloud implementations.
- Problem-solving, analytical, time management, and decision-making abilities; self-motivated.
- An engineering mindset, with a strong desire for continual improvement of self, team, and organization.
- Strong technical skills and knowledge of the SDLC.
- Good aptitude, positive attitude, strong reasoning, and communication skills. Must be a good team player.
- Good analytical skills; research-oriented.
- Azure DevOps.
- AWS services: CDK or YAML, DynamoDB, SNS, SQS, QuickSight, API Gateway, Cognito.

Qualifications
- Graduate - Bachelor's degree (any stream)
- AWS Developer certification would be an added advantage

Additional Information
- Strong engineering mindset.
- Ability and willingness to work across various Cloud Architecture / SysOps / DevOps initiatives.
- Ability to deal with ambiguity.
- Ability and willingness to work in a fast-paced, agile environment.
- High-level understanding of systems and processes supported by the system.
- Sound technical skills, high aptitude, positive attitude, strong interpersonal skills, and excellent communication and time management skills.
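For illustration, a minimal Python data-integration script of the kind described might drain an SQS queue and land each message in DynamoDB. The queue URL, table name, and message shape are assumptions for the sketch.

    import json

    import boto3

    sqs = boto3.client("sqs")
    table = boto3.resource("dynamodb").Table("integration-events")  # assumed table
    QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/upstream-events"  # assumed

    def drain_once() -> int:
        """Receive up to 10 messages, persist them, then delete them from the queue."""
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10
        )
        messages = resp.get("Messages", [])
        with table.batch_writer() as batch:  # flushes PutItem calls on exit
            for msg in messages:
                record = json.loads(msg["Body"])
                batch.put_item(Item={"event_id": record["id"], "payload": msg["Body"]})
        # Delete only after the writes have been flushed to DynamoDB.
        for msg in messages:
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return len(messages)

    if __name__ == "__main__":
        print(f"processed {drain_once()} messages")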

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 - 3 Lacs

Bengaluru

Work from Office

Mid-Level Python Engineer (Pipeline, Elasticsearch, DynamoDB)
Location: Bangalore
Experience: 3+ years
Designation: Member of Technical Staff

There's the typical job. Then there's a career at Alphastream. Where we'll challenge you to defy routine. To explore the far reaches of the possible. To travel uncharted paths. And to be a part of something far bigger than yourself. Because around here, changing the world just comes with the job description.

Job Summary:
We are looking for a Mid-Level Python Engineer with 3-5 years of experience to support and enhance our existing data pipelines and backend systems. The ideal candidate will have strong hands-on expertise in Python, Elasticsearch/OpenSearch, and DynamoDB, and will be responsible for maintaining and scaling our ETL workflows and search infrastructure. You should be comfortable working in a fast-paced environment, supporting production systems, and collaborating with cross-functional teams to build reliable and scalable data solutions.

Responsibilities:

Pipeline Development & Maintenance
- Build, extend, and maintain Python-based data ingestion and transformation pipelines (Airflow, AWS Lambda, or similar).
- Ensure pipelines are performant, well-tested, and monitored end-to-end.

DynamoDB Engineering
- Design and optimize table schemas, GSIs, and data access patterns.
- Implement CRUD operations and batch processes using the AWS SDK (boto3).
- Monitor capacity, handle throttling, and optimize cost/performance trade-offs.

Elasticsearch/Search Engineering
- Define index mappings, analyzers, and ingest pipelines.
- Write and optimize search queries (DSL, aggregations).
- Implement relevance tuning and performance optimizations.
- Integrate with Python applications (e.g., using the official Elasticsearch client).

Production Support & Monitoring
- Troubleshoot pipeline failures, search performance issues, and data inconsistencies.
- Instrument services with logging, metrics (CloudWatch, Prometheus), and alerting.
- Drive continuous improvement: automate manual tasks, improve runbooks, and share learnings.

Collaboration & Documentation
- Work closely with cross-functional teams to gather requirements and iterate on solutions.
- Write clear, concise documentation for pipeline workflows, data models, and search configurations.

Requirements:

Experience:
- 3-5 years of professional software engineering experience.
- Minimum 2 years working with Python in production environments.

Technical Skills:
- Python: strong skills in core language features, packaging, virtual environments, and testing frameworks (pytest/unittest).
- DynamoDB: design, operation, performance tuning, and the AWS SDK (boto3).
- Elasticsearch/OpenSearch: index design, query DSL, performance tuning, and Python client integration.
- AWS: familiarity with AWS services (Lambda, S3, IAM, CloudWatch).
- ETL/Orchestration: experience with batch and streaming pipelines (Airflow, AWS Glue, Lambda + Kinesis, etc.).

Soft Skills:
- Strong problem-solving and debugging skills.
- Clear verbal and written communication.
- Self-starter who can work independently and collaboratively.

Benefits:
- Competitive salary and benefits package.
- Opportunities for professional growth and career development.
- Dynamic and collaborative work environment.
- Cutting-edge technologies and projects.

If you are a talented engineer looking for an exciting opportunity to make a difference, we'd love to hear from you! Apply now to join our team.

Attitude:
- Fail-fast mentality and a drive to succeed based on the learnings.
- Collaboration with business stakeholders, Product Management, Business Analysts, Development (UI & Engineering) teams, domain experts, and Senior Financial Analysts.
- Quick learner with minimal guidance.

Good to have:
- Good interpersonal skills.
- Finance background.
- Preferably from a data-driven company.

Who We Are
Alphastream.ai envisions a dynamic future for the financial world, where innovation is propelled by state-of-the-art AI technology and enriched by a profound understanding of credit and fixed-income research. Our mission is to empower asset managers, research firms, hedge funds, banks, and investors with smarter, faster, and curated data. We provide accurate, timely information, analytics, and tools across simple to complex financial and non-financial data, enhancing decision-making. With a focus on bonds, loans, financials, and sustainability, we offer near real-time data via APIs and PaaS (Platform as a Service) solutions that act as the bridge between our offerings and seamless workflow integration. To learn more about us: https://alphastream.ai/

What we offer
At Alphastream.ai we offer a dynamic and inclusive workplace where your skills are valued and your career can flourish. Enjoy competitive compensation, a comprehensive benefits package, and opportunities for professional growth. Immerse yourself in an innovative work environment, maintain a healthy work-life balance, and contribute to a diverse and inclusive culture. Join us to work with cutting-edge technology, and be part of a team that recognizes and rewards your achievements, all while fostering a fun and engaging workplace culture.

Disclaimer: Alphastream.ai is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of all communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
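As a hedged sketch of the search-engineering work listed above, the snippet below runs a bool query with a terms aggregation through the official Elasticsearch Python client. The endpoint, index, and field names are assumptions for the example.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed endpoint

    resp = es.search(
        index="documents",  # assumed index
        query={
            "bool": {
                "must": [{"match": {"title": "credit risk"}}],
                "filter": [{"range": {"published": {"gte": "2024-01-01"}}}],
            }
        },
        aggs={"by_source": {"terms": {"field": "source.keyword", "size": 10}}},
        size=5,
    )
    # Top hits by relevance, then bucketed counts per source.
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"].get("title"))
    for bucket in resp["aggregations"]["by_source"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])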

Posted 2 weeks ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Pune

Work from Office

The Data Engineer supports, develops, and maintains a data and analytics platform to efficiently process, store, and make data available to analysts and other consumers. This role collaborates with Business and IT teams to understand requirements and best leverage technologies for agile data delivery at scale.

Note: Even though the role is categorized as Remote, it will follow a hybrid work model.

Key Responsibilities:
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Develop and operate large-scale data storage and processing solutions using cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, DynamoDB).
- Ensure data quality and integrity through continuous monitoring and troubleshooting.
- Implement data governance processes, managing metadata, access, and data retention.
- Develop scalable, efficient, and quality data pipelines with monitoring and alert mechanisms.
- Design and implement physical data models and storage architectures based on best practices.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Utilize agile development technologies such as DevOps, Scrum, and Kanban for continuous improvement in data-driven applications.

Qualifications, Skills, and Experience:

Must-Have:
- 2-3 years of experience in data engineering with expertise in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Strong understanding of Spark Streaming, Spark internals, and query optimization.
- Proficiency in Azure cloud services.
- Agile development experience.
- Experience in unit testing of ETL pipelines.
- Expertise in creating ETL pipelines integrating ML models.
- Knowledge of Big Data storage strategies (optimization and performance).
- Strong problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
- Exposure to Agile software development methodologies.
- Quick learner with adaptability to new technologies.

Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to Big Data open-source technologies.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement in cloud environments.
- Experience in building analytical solutions.
- Exposure to IoT technology.

Competencies:
- System Requirements Engineering: Translates stakeholder needs into verifiable requirements.
- Collaborates: Builds partnerships and works collaboratively with others.
- Communicates Effectively: Develops and delivers clear communications for various audiences.
- Customer Focus: Builds strong customer relationships and delivers customer-centric solutions.
- Decision Quality: Makes timely and informed decisions to drive progress.
- Data Extraction: Performs ETL activities from various sources using appropriate tools and technologies.
- Programming: Writes and tests computer code using industry standards, tools, and automation.
- Quality Assurance Metrics: Applies measurement science to assess solution effectiveness.
- Solution Documentation: Documents and communicates solutions to enable knowledge transfer.
- Solution Validation Testing: Ensures configuration changes meet design and customer requirements.
- Data Quality: Identifies and corrects data flaws to support governance and decision-making.
- Problem Solving: Uses systematic analysis to identify and resolve issues effectively.
- Values Differences: Recognizes and values diverse perspectives and cultures.

Education, Licenses, and Certifications:
College, university, or equivalent degree in a relevant technical discipline, or equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Work Schedule:
Work primarily with stakeholders in the US, requiring a 2-3 hour overlap during EST hours as needed.
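As an illustration of the PySpark pipeline work this role centres on, here is a hedged batch-transform sketch that de-duplicates raw events and writes a date-partitioned Delta table. The paths and column names are assumptions for the example.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    raw = spark.read.json("/mnt/raw/events/")  # assumed landing path
    cleaned = (
        raw.dropDuplicates(["event_id"])                 # assumed unique key
           .withColumn("event_date", F.to_date("event_ts"))
           .filter(F.col("event_type").isNotNull())      # basic data-quality gate
    )
    (
        cleaned.write.format("delta")
               .mode("overwrite")
               .partitionBy("event_date")
               .save("/mnt/curated/events/")             # assumed curated path
    )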

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Please note: even though the GPP mentions Remote, this is a hybrid role.

Key Responsibilities:
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Continuously monitor and troubleshoot data quality and integrity issues.
- Implement data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Develop reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages.
- Develop physical data models and implement data storage architectures as per design guidelines.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
- Use agile development technologies, such as DevOps, Scrum, Kanban, and continuous improvement cycles, for data-driven applications.

Qualifications:
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Competencies:
- System Requirements Engineering: Translate stakeholder needs into verifiable requirements and establish acceptance criteria.
- Collaborates: Build partnerships and work collaboratively with others to meet shared objectives.
- Communicates Effectively: Develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer Focus: Build strong customer relationships and deliver customer-centric solutions.
- Decision Quality: Make good and timely decisions that keep the organization moving forward.
- Data Extraction: Perform ETL activities from various sources and transform them for consumption by downstream applications and users.
- Programming: Create, write, and test computer code, test scripts, and build scripts using industry standards and tools.
- Quality Assurance Metrics: Apply measurement science to assess whether a solution meets its intended outcomes.
- Solution Documentation: Document information and solutions based on knowledge gained during product development activities.
- Solution Validation Testing: Validate configuration item changes or solutions using best practices.
- Data Quality: Identify, understand, and correct flaws in data to support effective information governance.
- Problem Solving: Solve problems using systematic analysis processes and industry-standard methodologies.
- Values Differences: Recognize the value that different perspectives and cultures bring to an organization.

Skills and Experience Needed:

Must-Have:
- 3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Experience with Spark Streaming, Spark internals, and query optimization.
- Proficiency in Azure cloud services.
- Agile development experience.
- Unit testing of ETL.
- Experience creating ETL pipelines with ML model integration.
- Knowledge of Big Data storage strategies (optimization and performance).
- Critical problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
- Quick learner.

Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to Big Data open-source technologies.
- Experience with Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka.
- SQL query language proficiency.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement in a cloud-based environment.
- Exposure to Agile software development.
- Experience building analytical solutions.
- Exposure to IoT technology.

Work Schedule:
Most of the work will be with stakeholders in the US, with an overlap of 2-3 hours during EST hours on a need basis.
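To ground the Delta Lake/Lakehouse bullet above, here is a hedged sketch of an incremental upsert (MERGE) into a Delta table; the paths and join key are assumptions for the example.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders-upsert").getOrCreate()

    updates = spark.read.parquet("/mnt/staging/orders/")        # assumed staging path
    target = DeltaTable.forPath(spark, "/mnt/curated/orders/")  # assumed Delta table

    (
        target.alias("t")
              .merge(updates.alias("s"), "t.order_id = s.order_id")  # assumed key
              .whenMatchedUpdateAll()      # refresh changed orders
              .whenNotMatchedInsertAll()   # add new orders
              .execute()
    )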

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hār, Himachal Pradesh, India

On-site

Role: Senior Software Engineer - Full Stack
Location: Gurgaon / Hybrid
Skills: React, Angular, JavaScript, TypeScript, .NET, C#, Kotlin

Senior Software Engineer - Full Stack (Gurugram-based, backend-heavy)
Shift timings: General
Years of experience: 7+
Joining: Immediate joiners
Location: Gurgaon / Hybrid

The Opportunity
We are looking for key contributors to our industry-leading front-end websites. You'll be working on products which have evolved tremendously over the past several years to become the global market leader. You'll be using the most current technologies and best practices to accomplish our goals. A typical day involves:
- Creating new end-to-end systems
- Building advanced architectures
- Adding new features to high-uptime, frequently published websites and apps
- Developing fast and reliable automated testing systems
- Working in a culture that continually seeks to improve quality, tools, and efficiency

What You'll Need To Succeed (Must)
- 7+ years of experience developing web applications in client-side frameworks like React or Angular
- Strong understanding of object-oriented JavaScript and TypeScript
- Hands-on experience in .NET, C#, Kotlin, or Java (backend)
- B.S. in Computer Science or a quantitative field; M.S. preferred
- Familiarity with agile methodologies, analytics, A/B testing, feature flags, Continuous Delivery, and Trunk-based Development
- Excellent HTML/CSS skills – you know how to make data both functional and visually appealing
- Hands-on experience with CI/CD solutions like GitLab
- Passion for new technologies and the best tools available
- Strong communication and coordination skills
- Excellent analytical thinking and problem-solving ability
- Proficiency in English

It's Great If You Have
- Experience designing physical architecture at scale, including resilient and highly available systems
- Knowledge of NoSQL technologies: Cassandra, ScyllaDB, Elasticsearch, Redis, DynamoDB, etc.
- Knowledge of queueing systems: Kafka, RabbitMQ, SQS, Azure Service Bus, etc.
- Experience with containers, Docker, and ideally Kubernetes (K8s)
- CI/CD expertise (additional tools beyond GitLab are a plus)
- Proficiency in modern coding and design practices (Clean Code, SOLID principles, TDD)
- Experience working on high-traffic applications with large user bases
- Background in data-driven environments with Big Data analysis
- Experience leading teams or greenfield projects solving complex system challenges
- Experience with global projects serving international markets and distributed data centres with localized UIs and data

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior.

As an Engineering Manager, you will lead a cross-functional team of developers and DevOps engineers using a Scrum/Agile team management approach. You will provide technical expertise and guidance to team members, help develop designs for complex applications, plan tasks and project phases, and review, comment on, and approve the analysis, proposed design, and test strategy produced by members of the team.

Responsibilities
- Oversee the development of scalable, reliable, and cost-effective software solutions with an emphasis on quality and best-practice coding standards
- Aid with driving business unit financials, and ensure budgets and schedules meet corporate requirements
- Participate in corporate development of methods, techniques and evaluation criteria for projects, programs, and people
- Have overall control of planning, staffing, budgeting, and managing expense priorities for the team you lead
- Provide training and coaching, and share technical knowledge with less experienced staff
- Carry out people manager duties, including annual reviews, career guidance, and compensation planning
- Rapidly identify technical issues as they emerge, and assess their impact on the business
- Provide day-to-day work direction to a large team of developers
- Collaborate effectively with Data Science to understand, translate, and integrate data methodologies into the product
- Collaborate with product owners to translate complex business requirements into technical solutions, providing leadership in the design and architecture processes
- Stay informed about the latest technology and methodology by participating in industry forums, having an active peer network, and engaging actively with customers
- Cultivate a team environment focused on continuous learning, where innovative technologies are developed and refined through collaborative effort

Key Skills
- Bachelor's degree in computer science, engineering, or a relevant field
- 8+ years of experience in information technology solutions development and 2+ years of managerial experience
- Proven experience in leading and managing software development teams
- Development background in Java in an AWS cloud-based environment for high-volume data processing
- Experience with data warehouses, ETL, and/or data lakes
- Experience with databases such as Postgres, DynamoDB, or Redshift
- Good understanding of CI/CD principles and tools; GitLab a plus
- Ability to provide solutions utilizing best practices for resilience, scalability, cloud optimization and security
- Excellent project management skills

Other desirable skills
- Knowledge of networking principles and security best practices
- AWS certification is a plus
- Experience with MS Project or Smartsheet
- Experience with Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus
- Exposure to the Google Cloud Platform (GCP) useful

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.

Posted 2 weeks ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Bengaluru

Work from Office

Responsibilities
- Design and develop new features to meet evolving business and technical needs
- Maintain and enhance existing functionality to ensure reliability and performance
- Collaborate directly with customers to gather requirements and understand business objectives
- Stay up to date with the latest technologies and apply them to influence project decisions and outcomes

Requirements
- 1+ year of experience in developing commercial applications on .NET
- Good understanding of the Software Development Lifecycle
- Understanding of C#, including .NET 6/8 and .NET Framework 4.8
- Good knowledge and experience with Azure (Azure Functions, VMs, Cosmos DB, Azure SQL) or AWS (EC2, Lambda, S3, DynamoDB)
- Skills in front-end web development (React, Angular, TypeScript)
- Substantial knowledge of relational and non-relational databases
- Good knowledge of Event-Driven Architecture (CQRS & Event Sourcing), Domain-Driven Design, and Microservices
- Experience working with CI/CD (Azure DevOps, AWS CodePipeline)
- Experience with testing tools and techniques
- Good spoken English (at least B1 level according to CEFR)

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Requirements
- Minimum of 2-3 years of full stack software development experience building large-scale, mission-critical applications.
- Strong foundation in computer science, with strong competencies in data structures, algorithms, and software design optimized for building highly distributed and parallelized systems.
- Proficiency in one or more programming languages - Java and Python.
- Strong hands-on experience in MEAN, MERN, Core Java, J2EE technologies, microservices, Spring, Hibernate, SQL, and REST APIs.
- Experience in web development using one of the technologies, like Angular or React.
- Experience with one or more of the following database technologies: SQL Server, Postgres, MySQL, and NoSQL such as HBase, MongoDB, and DynamoDB.
- Strong problem-solving skills to deep dive, brainstorm, and choose the best solution approach.
- Experience with AWS services like EKS, ECS, S3, EC2, RDS, Redshift, and with GitHub/Stash, CI/CD pipelines, Maven, Jenkins, security tools, Kubernetes/VMs/Linux, monitoring, alerting, etc.
- Experience in Agile development is a big plus.
- Excellent presentation, collaboration, and communication skills required.
- Result-oriented and experienced in leading broad initiatives and teams.
- Knowledge of Big Data technologies like Hadoop, Hive, Spark, Kafka, etc. would be an added advantage.
- Bachelor's or Master's degree in Mathematics or Computer Science.
- 1-4 years of experience as a Full Stack Engineer.
- Proven analytical skills and experience designing scalable applications.

This job was posted by Vivek Chhikara from Protium.

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and experienced Application Architect with a strong background in designing and architecting both user interfaces and backend Java microservices, with significant exposure to Amazon Web Services (AWS). As an Application Architect, you will be responsible for defining the architectural vision and ensuring the scalability, performance, and maintainability of our applications. You will collaborate closely with engineering teams, product managers, and other stakeholders to deliver robust and innovative solutions.

Responsibilities

Architectural Design and Vision:
- Define and communicate the architectural vision and principles for both frontend and backend systems.
- Design scalable, resilient, and secure application architectures leveraging Java microservices and cloud-native patterns on AWS.
- Develop and maintain architectural blueprints, guidelines, and standards.
- Evaluate and recommend technologies, frameworks, and tools for both UI and backend development.
- Ensure alignment of architectural decisions with business goals and technical strategy.

UI Architecture and Development Guidance:
- Provide architectural guidance and best practices for developing modern and responsive user interfaces (e.g., using React, Angular, Vue.js).
- Define UI architecture patterns, component design principles, and state management strategies.
- Ensure UI performance, accessibility, and user experience considerations are integrated into the architecture.
- Collaborate with UI developers and designers to ensure technical feasibility and optimal implementation of UI designs.

Backend Microservices Architecture and Development Guidance:
- Design and architect robust and scalable backend systems using Java and microservices architecture.
- Define API contracts, data models, and integration patterns for microservices.
- Ensure the security, reliability, and performance of backend services.
- Provide guidance to backend Java developers on best practices, coding standards, and architectural patterns.

AWS Cloud Architecture and Deployment:
- Design and implement cloud-native solutions on AWS, leveraging services such as EC2, ECS/EKS, Lambda, S3, RDS, DynamoDB, API Gateway, etc.
- Define infrastructure-as-code (IaC) strategies using tools like CloudFormation or Terraform.
- Architect for high availability, fault tolerance, and disaster recovery on AWS.
- Optimize cloud costs and ensure efficient resource utilization.
- Implement security best practices and compliance standards within the AWS environment.

Collaboration and Communication:
- Collaborate effectively with engineering managers, product managers, QA, DevOps, and other stakeholders.
- Communicate architectural decisions and trade-offs clearly and concisely to both technical and non-technical audiences.
- Facilitate technical discussions and resolve architectural challenges.
- Mentor and guide engineering teams on architectural best practices and technology adoption.

Technology Evaluation and Adoption:
- Research and evaluate new technologies and trends in UI frameworks, Java ecosystems, and AWS services.
- Conduct proof-of-concepts and feasibility studies for new technologies.
- Define adoption strategies for new technologies within the organization.

Performance and Scalability:
- Design systems with a focus on performance, scalability, and maintainability.
- Identify and address potential performance bottlenecks and scalability limitations.
- Define and implement monitoring and alerting strategies for applications and infrastructure.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 15+ years of experience in software development with a strong focus on Java.
- 5+ years of experience in designing and architecting complex applications, including both UI and backend systems.
- Deep understanding of microservices architecture principles and best practices.
- Strong expertise in Java and related frameworks (e.g., Spring Boot, Jakarta EE).
- Solid experience with modern UI frameworks (e.g., React, Angular, Vue.js) and their architectural patterns.
- Significant hands-on experience with Amazon Web Services (AWS) and its core services.
- Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration on AWS (ECS/EKS).
- Proficiency in designing and implementing RESTful APIs and other integration patterns.
- Understanding of database technologies (both relational and NoSQL) and their integration with microservices on AWS.
- Experience with infrastructure-as-code (IaC) tools like CloudFormation or Terraform.
- Strong understanding of security best practices for both UI and backend applications in a cloud environment.
- Excellent communication, presentation, and interpersonal skills.
- Proven ability to lead technical discussions and influence architectural decisions.

Preferred Qualifications
- Experience with event-driven architectures and messaging systems (e.g., Kafka, SQS).
- Familiarity with CI/CD pipelines and DevOps practices on AWS.
- Experience with performance testing and optimization techniques.
- Knowledge of different architectural patterns (e.g., CQRS, Event Sourcing).
- Experience in [Mention any specific domain or industry relevant to your company].
- AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional).
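As one hedged way to picture the IaC strategies mentioned above, the sketch below defines an API Gateway -> Lambda -> DynamoDB slice with AWS CDK for Python, which synthesizes to the CloudFormation the posting names; every resource name here is an assumption, not something from the posting.

    import aws_cdk as cdk
    from aws_cdk import aws_apigateway as apigw
    from aws_cdk import aws_dynamodb as dynamodb
    from aws_cdk import aws_lambda as _lambda

    class OrdersStack(cdk.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            table = dynamodb.Table(
                self, "Orders",
                partition_key=dynamodb.Attribute(
                    name="order_id", type=dynamodb.AttributeType.STRING
                ),
                billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            )
            fn = _lambda.Function(
                self, "OrdersFn",
                runtime=_lambda.Runtime.PYTHON_3_11,
                handler="app.handler",
                code=_lambda.Code.from_asset("lambda/"),  # assumed source dir
                environment={"TABLE_NAME": table.table_name},
            )
            table.grant_read_write_data(fn)  # least-privilege IAM grant
            apigw.LambdaRestApi(self, "OrdersApi", handler=fn)

    app = cdk.App()
    OrdersStack(app, "orders-dev")
    app.synth()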

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities Design, develop, and maintain scalable and high-performance web and mobile applications. Work across the stack with React, React Native, Golang, and Node.js. Architect and optimize APIs and microservices to ensure reliability, scalability, and security. Deploy, monitor, and manage cloud infrastructure using Kubernetes and AWS. Collaborate with product managers, designers, and other engineers to build seamless user experiences. Conduct code reviews, mentor junior developers, and promote best practices in software development. Continuously improve system performance, observability, and developer productivity. Troubleshoot and resolve production issues, ensuring uptime and reliability. Requirements 5+ years of experience as a Full Stack Engineer, working on production-grade applications. Strong proficiency in React.js and React Native for front-end development. Experience with Golang and Node.js for backend development. Solid understanding of microservices architecture and API development. Experience with Kubernetes, Docker, and cloud platforms (AWS). Knowledge of databases (SQL and NoSQL) such as PostgreSQL and DynamoDB. Familiarity with CI/CD pipelines and DevOps practices. Strong problem-solving and analytical skills. Experience building offline-first applications. Excellent communication and teamwork abilities. Nice-to-Have Experience in the POS or payments industry. Knowledge of GraphQL and gRPC. Familiarity with event-driven architecture (Kafka, RabbitMQ, etc.). Exposure to performance tuning and high-traffic system optimizations. This job was posted by Adarsha Kumari from Oolio.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join us as a Software Engineer This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority It’s a chance to hone your existing technical skills and advance your career We're offering this role at associate level What You'll Do In your new role, you’ll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. You’ll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform. You’ll Also Be Producing complex and critical software rapidly and of high quality which adds value to the business Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning Collaborating to optimise our software engineering capability Designing, producing, testing and implementing our working code Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations The Skills You'll Need You’ll need at least five years of experience in data sourcing including real-time data integration, and a certification in AWS cloud. You’ll Also Need Experience in AWS Cloud, Airflow, and associated data migration from on-premise to cloud with knowledge on databases like Snowflake, AWS Data Lake, PostgreSQL, Oracle, MongoDB and AWS DynamoDB, Experience in multiple programming languages or Low Code toolsets, Kafka and StreamSets Experience of DevOps, Testing and Agile methodology and associated toolsets A background in solving highly complex, analytical and numerical problems Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
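As a hedged sketch of the real-time data sourcing this role calls for, the snippet below consumes a Kafka topic and lands each record in DynamoDB via boto3. The topic, broker, and table names are placeholders, and kafka-python is just one of several client libraries that would do.

```python
# Minimal real-time sourcing sketch: Kafka topic -> DynamoDB table.
import json

import boto3
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "customer-events",                      # hypothetical topic
    bootstrap_servers=["broker:9092"],      # hypothetical broker
    group_id="dynamo-sink",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,               # commit only after a durable write
)
table = boto3.resource("dynamodb").Table("customer_events")  # hypothetical table

for message in consumer:
    event = message.value
    # Keyed on the event id, so re-delivered messages overwrite rather than duplicate.
    table.put_item(Item={"event_id": event["id"], **event})
    consumer.commit()
```

Committing offsets only after the DynamoDB write succeeds gives at-least-once delivery with idempotent writes, which is usually the pragmatic baseline for this kind of pipeline.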

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Role Overview We are looking for a Senior Backend Engineer with deep expertise in Python and scalable system architecture. This is a hands-on individual contributor (IC) role where you’ll design and develop high-performance, cloud-native backend services for enterprise-scale platforms. You’ll work closely with cross-functional teams to deliver robust, production-grade solutions. Key Responsibilities Design and build distributed, microservices-based systems using Python Develop RESTful APIs, background workers, schedulers, and scalable data pipelines Lead architecture discussions, technical reviews, and proof-of-concept initiatives Model data using SQL and NoSQL technologies (PostgreSQL, MongoDB, DynamoDB, ClickHouse) Ensure high availability and observability using tools like CloudWatch, Grafana, and Datadog Automate infrastructure and CI/CD workflows using Terraform, GitHub Actions, or Jenkins Prioritize security, scalability, and fault-tolerance across all services Own the entire lifecycle of backend components—from development to production support Document system architecture and contribute to internal knowledge sharing Requirements 10+ years of backend development experience with strong Python proficiency Deep understanding of microservices, Docker, Kubernetes, and cloud-native development (AWS preferred) Expertise in API design, authentication (OAuth2), rate limiting, and best practices Experience with message queues and async systems (Kafka, SQS, RabbitMQ) Strong database knowledge—both relational and NoSQL Familiarity with DevOps tools: Terraform, CloudFormation, GitHub Actions, Jenkins Effective communicator with experience working in distributed, fast-paced teams
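Of the responsibilities above, the background-worker pattern is easy to make concrete. Below is a minimal sketch of a long-polling SQS consumer in Python with boto3; the queue URL and handler body are placeholders, not anything from the posting.

```python
# Sketch of a long-polling SQS background worker.
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

def handle(payload: dict) -> None:
    ...  # business logic goes here

def run() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,      # long polling: fewer empty receives, lower cost
            VisibilityTimeout=60,    # must exceed worst-case handle() time
        )
        for msg in resp.get("Messages", []):
            handle(json.loads(msg["Body"]))
            # Delete only after success; a failed message becomes visible again
            # and is retried (eventually landing in a DLQ if one is configured).
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    run()
```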

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are seeking a Senior DevOps Software Engineer to join the Labs software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications. Roles & Responsibilities: Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB. Configure and manage CI/CD pipelines in GitLab to streamline the deployment process. Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance. Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly. Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure. Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance. Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle. Stay updated on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies. Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly. Functional Skills: Deep expertise in designing and maintaining CI/CD pipelines and enabling software engineering best practices and overall software product development lifecycle. Ability to implement automated testing, build, deployment, and rollback strategies. Advanced proficiency managing and deploying infrastructure with the AWS cloud platform, including cost planning, tracking and optimization. Proficiency with backend languages and frameworks (Python, FastAPI, Flask preferred). Experience with databases (Postgres/DynamoDB). Experience with microservices architecture and containerization (Docker, Kubernetes). Good-to-Have Skills: Familiarity with enterprise software systems in life sciences or healthcare domains. Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark). Knowledge of data security, privacy regulations, and scalable software solutions. Soft Skills: Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Ability to foster a collaborative and innovative work environment. Strong problem-solving abilities and attention to detail. High degree of initiative and self-motivation. Basic Qualifications: Bachelor's degree in Computer Science, AI, Software Engineering, or related field. 5+ years of experience in full-stack software engineering.
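For the IaC responsibility above, here is a hedged sketch using AWS CDK in Python (the posting does not name CDK; Terraform or raw CloudFormation would serve equally). It provisions a DynamoDB table and a Lambda function and wires least-privilege access between them; every resource name is invented.

```python
# Illustrative CDK v2 stack: Lambda + DynamoDB with scoped permissions.
from aws_cdk import App, Stack, aws_dynamodb as dynamodb, aws_lambda as lambda_
from constructs import Construct

class InferenceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        table = dynamodb.Table(
            self, "Results",
            partition_key=dynamodb.Attribute(
                name="request_id", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )
        fn = lambda_.Function(
            self, "Scorer",
            runtime=lambda_.Runtime.PYTHON_3_12,   # adjust to an available runtime
            handler="app.handler",
            code=lambda_.Code.from_asset("src"),   # assumes ./src/app.py exists
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(fn)  # least-privilege grant, no hand-written IAM

app = App()
InferenceStack(app, "labs-inference")
app.synth()
```

Because the stack is ordinary code, it can be reviewed, unit-tested, and promoted through a GitLab pipeline like any other artifact, which is the reproducibility the posting is after.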

Posted 2 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description Experian Consumer Services (ECS) is looking for Senior Full Stack Engineers in Hyderabad, India to work alongside our UK colleagues to deliver business outcomes for the UK&I region. Background This is an incredibly exciting time for the Experian UKI Region, as we look to build our presence out in Hyderabad and embark on a technology transformation programme to meet our global aspiration to significantly scale our business over the next five years. This is an opportunity to join us on this journey and be part of a collaborative team that uses Agile principles to deliver business value. Our unique culture and agile ways of working offer a great opportunity to those seeking to join a talented set of diverse problem solvers to design, build and maintain our products. We pride ourselves in excellence, adopting best practices and holding ourselves to the highest standards. Job Description The Role: Experian ECS are building a New Growth domain to help us meet a wider range of consumers' financial needs throughout their financial lives. To reach our strategic ambition we must expand our offerings to areas most aligned with what our consumers want. As an Engineer in the New Growth team, you will be responsible for developing the features and core services that power the applications and solutions our customers rely on. Working closely with other Developers, QA engineers, Architects and Product Owners you will grow to understand the domain before bringing your own ideas to solve real business problems. Responsibilities As a member of our agile team, you’ll have a passion for building and shipping high-performance, robust, and efficient AWS-based services that you can be proud of. You'll be responsible for feature delivery for our New Growth initiative. Design, develop, and maintain robust applications using .Net and React. Utilize strong analytical skills to solve complex technical problems. Collaborate with cross-functional teams to deliver high-quality software solutions. Develop and maintain full-stack applications, ensuring seamless integration and functionality. Implement unit testing and acceptance test automation to ensure software reliability. Work with the existing CI/CD pipeline and support the team with this process. Stay updated with modern technologies and best practices to continuously improve development processes. Mentor junior engineers and provide technical leadership. Lead architectural design and decision-making processes to ensure scalable and efficient solutions. Define and enforce best practices for software development and architecture. Evaluate and integrate new technologies to enhance system capabilities and performance. Qualifications 4-7 years of experience in software development, with extensive expertise in .Net and React. Good knowledge of microservice architecture delivered on .NET Core, Node.js & React, hosted using AWS technologies such as CloudFront, S3, Fargate, EC2, Lambda, SNS, SQS & DynamoDB. Experience of developing outstanding Flutter applications for iOS and Android. Good with feedback, continually looking to improve and develop. Strong knowledge of algorithms, data structures, and software analytics. Excellent communication skills and ability to work in a fast-paced environment. AWS certification is preferred. Familiarity with full-stack development and unit testing. Experience with acceptance test automation. Quick learner with the ability to adapt to new technologies.
We expect you to have good experience in software engineering, with a proven track record of building mission-critical, high-volume transaction web-based software systems. Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning; World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024 to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together Find out what it's like to work for Experian by clicking here

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 10 Lacs

Hyderabad

Work from Office

ABOUT THE ROLE We are seeking a Senior Full-Stack Software Engineer to join the Labs software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior Full-Stack Software Engineer will play a key role in turning AI concepts into products, working closely with product managers and AI and software engineers and architects. This is a hands-on, cross-functional role that blends modern software development with AI integration in a rapid innovation and prototyping operating model. Roles & Responsibilities: Design, develop, and maintain microservices to ensure the software is modular, scalable, and maintainable. Create and manage RESTful APIs to facilitate seamless communication between different software components and external systems. Apply and advocate for best practices in software development, including code reviews, unit testing, continuous integration, and continuous deployment. Implement and manage deployments using Docker to ensure consistent and efficient application delivery across different environments. Design, implement, and maintain database schemas, ensuring efficient data storage, retrieval, and manipulation. Develop user-friendly and responsive front-end applications using modern web technologies to provide a seamless user experience. Work closely with product managers, designers, and other engineers to deliver high-quality software solutions that meet business requirements. Identify and resolve software issues and bugs promptly to ensure smooth operation and minimal downtime. Stay updated with emerging technologies, industry trends, and best practices in software development to continuously improve skills and knowledge. Promote code quality through reviews, static analysis tools, and adherence to team standards and best practices. Provide guidance and mentorship to junior engineers, fostering a collaborative and growth-oriented team environment. Functional Skills: Deep understanding of software engineering best practices and overall software product development lifecycle, including version control, CI/CD, TDD, and agile methodologies. Strong grasp of OOP, design patterns, and clean code principles with a focus on maintainability and testability. Proficiency with backend languages and frameworks (Python, FastAPI, Flask preferred). Proficiency in JavaScript and modern web technologies, including React, Angular, and Node.js. Experience with databases (Postgres/DynamoDB). Experience managing and deploying infrastructure in at least one cloud provider such as AWS (preferred), Azure, or Google Cloud. Experience with microservices architecture and containerization (Docker, Kubernetes). Good-to-Have Skills: Familiarity with enterprise software systems in life sciences or healthcare domains. Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark). Knowledge of data security, privacy regulations, and scalable software solutions. Soft Skills: Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Ability to foster a collaborative and innovative work environment. Strong problem-solving abilities and attention to detail. High degree of initiative and self-motivation. Basic Qualifications: Bachelor's degree in Computer Science, AI, Software Engineering, or related field. 5+ years of experience in full-stack software engineering.
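Since the posting prefers Python/FastAPI and stresses unit testing and TDD, a minimal sketch of testing a FastAPI microservice in-process might look like the following; the endpoints are invented for illustration, and the test client requires httpx to be installed.

```python
# Illustrative unit tests for a FastAPI microservice, run with pytest.
from fastapi import FastAPI
from fastapi.testclient import TestClient  # requires httpx

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/echo")
def echo(body: dict) -> dict:
    return {"received": body}

client = TestClient(app)  # exercises the app without a network socket

def test_health() -> None:
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}

def test_echo_round_trip() -> None:
    resp = client.post("/echo", json={"n": 1})
    assert resp.status_code == 200
    assert resp.json() == {"received": {"n": 1}}
```

Tests like these slot directly into a CI pipeline stage, which is how the unit-testing and continuous-integration duties in the posting usually meet in practice.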

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Get to know Okta Okta is The World’s Identity Company. We free everyone to safely use any technology—anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We’re building a world where Identity belongs to you. We seek a Software Engineer in Test to join our Client Foundations Team. Okta Device Access extends Okta's Identity and Access Management capabilities to the device sign-in experience. Using the same authenticators used to secure your Okta-protected apps and workforce devices, your users can verify their identity and sign in to their devices with a secure, seamless experience. Windows is our primary platform of focus and the team is constantly exploring new technologies and services while innovating products. We’re pushing the envelope forward... come join us! We are seeking a Software Engineer in Test who is passionate about testing mission-critical Okta Device Access products in a dynamic, agile environment. You will collaborate with the Client Foundations Team in India, sharing our commitment to delivering simple, elegant, and highly usable solutions. At Okta Engineering, we value automated testing, UX design, and an iterative approach to building high-quality, next-generation software. Job Duties and Responsibilities: Review requirements and design specs to develop relevant test plans and test cases Automate API tests, end-to-end tests, reliability/scale tests Work with engineering management to scope and plan engineering efforts Communicate and document QE plans for scrum teams to review Review application code, identify bugs and other areas of weakness, architect tools for future coverage Automate all critical features to maintain zero-debt cadence Release features with solid quality Respond to production issues/alerts and customer issues during on-call rotation Be a strong customer advocate with a strong quality DNA. Requirements: Minimum of a Bachelor's degree in software engineering (or related) 5+ years of product testing and test automation experience Experience in quality engineering for enterprise-level software. Experience in XCUI-based automation development. Should be able to write new and maintain existing automated test cases Familiarity with automation tools like Selenium, TestCafe, API (Rest Assured, Karate, etc.). Familiarity with databases such as MySQL, DynamoDB, etc. Expertise in test planning and cross-team collaborative efforts. Experience working with distributed systems at large scale Able to write and review designs and code with other team members Able to deliver well-designed, high-quality code on time Nice to have: Experience with continuous integration/continuous deployment (CI/CD) practices. Experience with Office 365, Google, Salesforce, and Active Directories/LDAP integrations. What you can look forward to as a Full-Time Okta employee!
Amazing Benefits Making Social Impact Developing Talent and Fostering Connection + Community at Okta Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding. Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/.
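To make the API-test automation duty in this role concrete, here is an illustrative pytest sketch; the base URL, token, and devices endpoint are placeholders for the example, not Okta's actual API surface.

```python
# Illustrative authenticated API test of the kind an SDET automates.
import os

import pytest
import requests

BASE_URL = os.environ.get("BASE_URL", "https://example.test")  # placeholder
TOKEN = os.environ.get("API_TOKEN", "test-token")              # placeholder

@pytest.fixture
def session() -> requests.Session:
    s = requests.Session()
    s.headers.update({"Authorization": f"Bearer {TOKEN}"})
    return s

def test_list_devices_respects_page_limit(session: requests.Session) -> None:
    resp = session.get(f"{BASE_URL}/api/v1/devices", params={"limit": 20}, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert isinstance(body, list)
    assert len(body) <= 20  # the contract under test: pagination is honored
```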

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

Job requisition ID: 84234 Date: Jun 15, 2025 Location: Delhi Designation: Senior Consultant Entity: What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential. The Team Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about Analytics and Information Management Practice Work you’ll do As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers. Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Experience: 2 to 7 years. Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar. Key Responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Required Qualifications: 1. Bachelor's degree in Computer Science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5.
Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions Big Data: Hadoop, Spark, Delta Lake Programming: Python, PySpark Databases: SQL, PostgreSQL, NoSQL Data Warehousing and Analytics ETL/ELT processes Data Lake architectures Version control: GitHub Your role as a leader At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization: Develop high-performing people and teams through challenging and meaningful opportunities Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction. Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make How you will grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our purpose Deloitte is led by a purpose: To make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world Recruiter tips We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
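A hedged PySpark sketch of the Glue/EMR-style ETL listed above: read raw JSON from S3, cleanse it, and write partitioned Parquet to a curated zone. The bucket paths and column names are hypothetical.

```python
# Compact PySpark ETL: raw JSON -> cleansed, partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://raw-zone/orders/")            # hypothetical path

cleansed = (
    raw.dropDuplicates(["order_id"])                      # de-dupe on business key
       .filter(F.col("amount") > 0)                       # drop invalid rows
       .withColumn("order_date", F.to_date("order_ts"))   # derive partition column
)

(cleansed.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://curated-zone/orders/"))                # hypothetical path
```

The same script runs largely unchanged as a Glue job or an EMR step, which is why PySpark sits at the center of the stack this posting describes.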

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

Job requisition ID: 84245 Date: Jun 15, 2025 Location: Delhi Designation: Consultant Entity: What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential. The Team Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about Analytics and Information Management Practice Work you’ll do As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers. Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Experience: 2 to 7 years. Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar. Key Responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Required Qualifications: 1. Bachelor's degree in Computer Science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5.
Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions Big Data: Hadoop, Spark, Delta Lake Programming: Python, PySpark Databases: SQL, PostgreSQL, NoSQL Data Warehousing and Analytics ETL/ELT processes Data Lake architectures Version control: GitHub Your role as a leader At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization: Develop high-performing people and teams through challenging and meaningful opportunities Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction. Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make How you will grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our purpose Deloitte is led by a purpose: To make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world Recruiter tips We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
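The Athena analytics responsibility above can be driven from Python with boto3; below is a minimal, hedged sketch that submits a query, polls for completion, and reads the results. The database, table, and output location are placeholders.

```python
# Illustrative Athena query run via boto3.
import time

import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "curated"},                   # placeholder
    ResultConfiguration={"OutputLocation": "s3://athena-results/"},  # placeholder
)["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```

In a Step Functions workflow, the same submit/poll pattern is typically replaced by the service's native Athena integration, so the state machine handles the waiting.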

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Summary AWS DevSecOps Engineer – CL4 Role Overview: As a DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities: Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient and secure pipelines with low operating costs, meeting platform/technology KPIs. Technical Leadership and Advocacy: Serve as the technical advocate for DevSecOps modern practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices, being responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.) and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities. Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments while leading the implementation of deployment techniques like Blue-Green and Canary to minimize downtime and enable A/B testing. Be always hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., leading triage and troubleshooting production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies. Customer-Centric Engineering: Develop lean, and yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation.
Advanced Technical Proficiency: Possess intermediary knowledge in modern software engineering practices and principles, including Agile methodologies, DevSecOps, Continuous Integration/Continuous Deployment. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate intermediate-level understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility. Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Learn to create a coherent narrative that aligns technical solutions with business objectives. Engagement and Collaborative Co-Creation: Able to engage and collaborate with product engineering teams, including customers as needed. Able to build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions. The team: US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes by leveraging a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications: A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. Strong software engineering foundation with deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentations, etc. 5+ years proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred). 5+ years proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes. 5+ years of hands-on experience in security tools automation: SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP. 5+ years of hands-on experience in using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
2+ years of hands-on experience with cloud-native services like Data Lakes, CDN, API Gateways, Managed PaaS, Security, etc. on multiple cloud providers like AWS, Azure and GCP is preferred. Strong understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly. General understanding of cloud providers' security practices, database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL). General knowledge of networking, firewalls, and load balancers. Strong preference will be given to candidates with AI/ML and GenAI experience. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You Will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 302803
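The Blue-Green/Canary deployment techniques this role owns can be sketched for AWS Lambda using weighted alias routing via boto3. The function, alias, and version identifiers below are invented, and in practice the widen/promote/rollback decision would be driven by CloudWatch error metrics rather than hardcoded.

```python
# Canary sketch: shift a slice of Lambda traffic to a new version, then promote.
import boto3

lam = boto3.client("lambda")
FUNCTION, ALIAS, NEW_VERSION = "checkout-api", "live", "42"  # placeholders

# Step 1: send 10% of invocations to the new version while the alias still
# points at the current one; widen the weight (or roll back) based on metrics.
lam.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    RoutingConfig={"AdditionalVersionWeights": {NEW_VERSION: 0.10}},
)

# Step 2 (promote): point the alias at the new version and clear the split.
lam.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    FunctionVersion=NEW_VERSION,
    RoutingConfig={"AdditionalVersionWeights": {}},
)
```

Rolling back is the mirror image: clear the weights without changing FunctionVersion, and all traffic returns to the last known-good version.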

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies