14.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Responsibilities / Tasks

🔹 Strategic Sourcing & Category Management
- Lead category strategy development for direct and indirect spend (e.g., raw materials, machined components, castings, forgings).
- Drive supplier segmentation, risk management, and long-term sourcing agreements (LTAs).
- Conduct should-cost analysis, benchmarking, and total cost of ownership (TCO) assessments.
- Manage supplier performance using KPIs (OTD, quality, cost, innovation).
- Identify and onboard strategic suppliers aligned with cost, quality, and innovation targets.

🔹 Procurement Digitalization & Process Automation
- Champion the deployment and enhancement of digital procurement platforms (e.g., SAP Ariba, Coupa, Jaggaer, or custom ERP tools).
- Implement tools for e-sourcing, contract lifecycle management (CLM), supplier collaboration portals, and spend analytics dashboards.
- Drive automation of procure-to-pay (P2P) and source-to-contract (S2C) processes, and workflow integration with finance, production, and planning systems.
- Evaluate and implement AI/ML-based procurement intelligence and supplier scorecards.
- Lead change management and stakeholder training for digital adoption across global teams.

🔹 Cost Optimization & Value Engineering
- Lead cross-functional cost reduction initiatives with engineering, quality, and operations teams.
- Support Design-to-Cost (DTC) and Value Analysis/Value Engineering (VA/VE) programs.
- Evaluate global sourcing opportunities, including low-cost country (LCC) sourcing.
- Drive make-vs.-buy analysis and contribute to capacity expansion strategies.

🔹 Compliance, Sustainability & Governance
- Ensure compliance with internal policies, legal requirements, and supply chain transparency laws (e.g., RoHS, REACH, ESG reporting).
- Integrate sustainable procurement practices and develop supplier sustainability scorecards.
- Lead supplier risk mitigation strategies (geopolitical, financial, logistical, environmental).

Your Profile / Qualifications

Education
- Bachelor's degree in Mechanical, Production, or Industrial Engineering, or in Supply Chain.
- Master of Business Administration (MBA) or a master's in Supply Chain Management is an added advantage.

Experience
- Minimum 14 years in strategic sourcing/procurement.
- Proven experience in implementing or managing strategic procurement.
- Strong background in engineering/manufacturing industries (food and pharma machinery, automotive, heavy machinery, etc.).

Technical Skills
- Familiarity with data analytics tools (Power BI, Tableau, SQL).
- Strong analytical, negotiation, and project management skills.
- Ability to analyze technical drawings and specifications for procurement of engineered items.
- Expertise in eProcurement platforms (SAP Ariba, Oracle SCM, Coupa, etc.).

Soft Skills
- Strong leadership and stakeholder management skills.
- Excellent communication and change management capabilities.
- Business acumen and strategic thinking.

Did we spark your interest? Then please click apply above to access our guided application process.
Posted 1 day ago
10.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Specialist Strategic Buyer

Responsibilities / Tasks
- Evaluate demand and source suppliers according to that demand.
- Generate and implement efficient sourcing and category management strategies.
- Analyze and calculate procurement costs and develop cost reduction strategies.
- Drive purchasing decisions based on cost and scenario analysis, as well as market trends.
- Negotiate contracts with key suppliers, including costs and terms of supply, service, and quality.
- Collaborate with stakeholders, and identify and pursue new supplier opportunities.
- Conduct market research, identify potential new suppliers, and create cost estimates and forecasts.
- Estimate risks and apply risk-minimizing techniques, and negotiate contracts that comply with industry standards.
- Drive cost optimization.
- Implement and practice improved procurement processes to support new business requirements.
- Act independently to determine key objectives using the defined authorization matrix and guidelines.

Your Profile / Qualifications
- Bachelor's/Diploma degree in Engineering, or a certificate course in Materials Management.
- 10+ years of procurement experience.
- Proven working experience as a strategic procurement manager at a managerial level.
- Experience in collecting and analyzing data.
- Understanding of market dynamics and sound business judgement.
- Good understanding of process improvement.
- Good understanding of contract management, legal, and taxation.
- Detailed working knowledge of the SAP MM module.
- Negotiation skills.
- Understanding of e-business/e-procurement systems.
- Business ethics.
- Ability to communicate effectively through presentations, email, one-on-one, and team discussion.
- A natural facilitator, able to enable team-based decisions.
- Time management.
- Agile; strong troubleshooting and problem solving.

Did we spark your interest? Then please click apply above to access our guided application process.
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do

Let’s do this. Let’s change the world.

Role Description: We are seeking an experienced Senior Manager, Data Engineering to lead and scale a strong team of data engineers. This role blends technical depth with strategic oversight and people leadership. The ideal candidate will oversee the execution of data engineering initiatives, collaborate with business analysts and multi-functional teams, manage resource capacity, and ensure delivery aligned to business priorities. In addition to technical competence, the candidate will be adept at managing agile operations and driving continuous improvement.

Roles & Responsibilities:
- Possess strong rapid prototyping skills and quickly translate concepts into working code.
- Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies.
- Design, develop, and implement robust data architectures and platforms to support business objectives.
- Oversee the development and optimization of data pipelines and data integration solutions.
- Establish and maintain data governance policies and standards to ensure data quality, security, and compliance.
- Architect and manage cloud-based data solutions, leveraging AWS or other preferred platforms.
- Lead and motivate a strong data engineering team to deliver exceptional results.
- Identify, analyze, and resolve complex data-related challenges.
- Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions.
- Stay abreast of emerging data technologies and explore opportunities for innovation.
- Lead and manage a team of data engineers, ensuring appropriate workload distribution, goal alignment, and performance management.
- Work closely with business analysts and product collaborators to prioritize and align engineering output with business objectives.

What We Expect Of You

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree and 8 to 10 years of experience, preferably in computer science and engineering (other engineering fields considered); OR Bachelor’s degree and 10 to 14 years of such experience; OR Diploma and 14 to 18 years of such experience.
- Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions.
- Strong understanding of cloud architecture principles and cost optimization strategies.
- Proficiency in Python, PySpark, and SQL.
- Hands-on experience with big data ETL performance tuning (see the sketch below).
- Proven ability to lead and develop strong data engineering teams.
- Strong problem-solving, analytical, and critical thinking skills to address complex data challenges.
- Strong communication skills for collaborating with business and technical teams alike.

Preferred Qualifications:
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with Apache Spark and Apache Airflow.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Experience with AWS, GCP, or Azure cloud services.

Professional Certifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
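As an illustration of the kind of big data ETL performance tuning this role calls for, here is a minimal PySpark sketch; the bucket paths, table names, and columns are hypothetical, and the broadcast-join plus partitioned-write pattern shown is one common approach, not Amgen's actual method.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-tuning-sketch").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
events = spark.read.parquet("s3://example-bucket/events/")  # large
sites = spark.read.parquet("s3://example-bucket/sites/")    # small

# Broadcast the small dimension to avoid a shuffle-heavy sort-merge join.
enriched = events.join(F.broadcast(sites), on="site_id", how="left")

# Repartition by the write key so output files align with downstream reads.
daily = (
    enriched
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")
)

# A partitioned write keeps per-date files scannable without full-table reads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events_daily/"
)
```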
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Let’s do this. Let’s change the world. In this vital role you will play a key part in a regulatory submission content automation initiative that will modernize and digitize the regulatory submission process, positioning Amgen as a leader in regulatory innovation. The initiative leverages state-of-the-art technologies, including generative AI, structured content management, and integrated data, to automate the creation and management of regulatory content.

Roles & Responsibilities:
- Possess strong rapid prototyping skills and quickly translate concepts into working code.
- Contribute to both front-end and back-end development using cloud technology.
- Develop innovative solutions using generative AI technologies.
- Ensure code quality and adherence to best practices.
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
- Identify and resolve technical challenges effectively.
- Stay updated with the latest trends and advancements.
- Work closely with the product team, business team, and other stakeholders.
- Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements.
- Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications.
- Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software.
- Identify and resolve software bugs and performance issues.
- Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time.
- Customize modules to meet specific business requirements.
- Work on integrating with other systems and platforms to ensure seamless data flow and functionality.
- Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently.

What We Expect Of You

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree / Bachelor’s degree and 5 to 9 years of experience in Computer Science, IT, or a related field.

Preferred Qualifications:

Functional Skills (Must-Have):
- Proficiency in Python/PySpark development, FastAPI, PostgreSQL, Databricks, DevOps tools, CI/CD, and data ingestion; candidates should be able to write clean, efficient, and maintainable code (a FastAPI sketch follows below).
- Knowledge of HTML, CSS, and JavaScript, along with popular front-end frameworks like React or Angular, to build interactive and responsive web applications.
- In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles.
- Strong understanding of cloud computing principles, particularly within the AWS ecosystem.
- Strong understanding of software development methodologies, including Agile and Scrum.
- Experience with version control systems like Git.
- Hands-on experience with various cloud services, and an understanding of their trade-offs under well-architected cloud design principles.
- Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.
- Experience with API integration, serverless, and microservices architecture.
- Experience with SQL/NoSQL databases and vector databases for large language models.

Good-to-Have Skills:
- Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk).
- Experience with data processing tools like Hadoop, Spark, or similar.
- Experience with integration technologies.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com
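To make the FastAPI plus PostgreSQL expectation concrete, here is a minimal, hedged sketch of a service endpoint. The route, table, and connection string are invented for illustration and are not part of Amgen's stack; asyncpg is assumed as the PostgreSQL driver.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import asyncpg  # assumption: asyncpg is the chosen PostgreSQL driver

app = FastAPI(title="submission-content-sketch")

class Section(BaseModel):
    doc_id: str
    title: str
    body: str

@app.on_event("startup")
async def startup() -> None:
    # Hypothetical DSN; in practice this would come from configuration.
    app.state.pool = await asyncpg.create_pool(
        "postgresql://user:pass@localhost/regdb"
    )

@app.get("/sections/{doc_id}")
async def get_section(doc_id: str) -> Section:
    async with app.state.pool.acquire() as conn:
        row = await conn.fetchrow(
            "SELECT doc_id, title, body FROM sections WHERE doc_id = $1", doc_id
        )
    if row is None:
        raise HTTPException(status_code=404, detail="section not found")
    return Section(**dict(row))
```

Run with `uvicorn app:app` against a reachable PostgreSQL instance; the hypothetical `sections` table must already exist.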
Posted 1 day ago
9.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description

This is a remote position. We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Responsibilities
- Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
- Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling (see the sketch after this posting).
- Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
- Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
- Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
- Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
- Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
- Work closely with QA teams to integrate test automation and ensure data quality.
- Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases.
- Document architectures, pipelines, and workflows for internal stakeholders.

Requirements

Essential Skills (Job)
- Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
- Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
- Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
- Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and dimensional modeling.
- Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
- Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
- Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
- Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
- Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.

Essential Skills (Personal)
- Excellent communication and interpersonal skills, with the ability to engage with teams.
- Strong problem-solving, decision-making, and conflict-resolution abilities.
- Proven ability to work independently and lead cross-functional teams.
- Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
- Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
- Strong work ethic and trustworthiness.
- Highly collaborative and team-oriented, with a commitment to excellence.

Preferred Skills (Job)
- Proficiency in SQL and at least one programming language (e.g., Python, Scala).
- Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
- Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
- Deep understanding of AI/generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
- Experience with data modeling, data structures, and database design.
- Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).

Preferred Skills (Personal)
- Demonstrates proactive thinking.
- Strong interpersonal relations, expert business acumen, and mentoring skills.
- Ability to work under stringent deadlines and demanding client conditions.
- Ability to work under pressure to achieve multiple daily deadlines for client deliverables with a mature approach.

Other Relevant Information
- Bachelor’s in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 9+ years of experience in data engineering and data architecture.

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
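As a small, hedged illustration of the schema validation and data profiling step named in the responsibilities, here is a pandas sketch; the expected schema, file path, and the 5% null threshold are all invented, and the alerting line merely stands in for the MS Teams notification the posting mentions.

```python
import pandas as pd

# Hypothetical schema for an inbound file; names and types are illustrative.
EXPECTED = {"order_id": "int64", "amount": "float64", "placed_at": "datetime64[ns]"}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable schema problems (empty means OK)."""
    problems = []
    for col, dtype in EXPECTED.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    # Lightweight profiling: flag any column with more than 5% nulls.
    for col in df.columns:
        null_share = df[col].isna().mean()
        if null_share > 0.05:
            problems.append(f"{col}: {null_share:.1%} nulls")
    return problems

df = pd.read_csv("landing/orders.csv", parse_dates=["placed_at"])
for issue in validate(df):
    print("ALERT:", issue)  # a real pipeline might post this to MS Teams
```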
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
TransUnion's Job Applicant Privacy Notice

What We'll Bring
An experienced data scientist role with an extensive mathematical programming and theoretical foundation and advanced software development skills to support development of state-of-the-art homegrown analytic platforms.

What You'll Bring
- Master’s or PhD degree in statistics, applied mathematics, financial mathematics, computer science, engineering, operations research, or another highly quantitative field; or a Bachelor’s degree in a quantitative field with at least seven (7) years of relevant professional experience. In either case, the candidate will demonstrate a consistent track record of academic excellence.
- Advanced C++ programming skills, preferably in scientific computing applications.
- Experience designing and implementing advanced numerical algorithms.
- Experience with integrating popular machine learning frameworks (XGBoost, LightGBM, H2O); see the sketch after this posting.
- Proficiency with statistical languages such as R or machine learning packages for Python; experience with other programming languages (Scala, Java) and HPC environments (Slurm, Univa, SGE, Torque).
- Advanced SQL programming skills and experience with big data platforms (Hadoop, Spark, Hive); knowledge of Apache Arrow is a plus.
- Experience engineering connections between big data frameworks and front-end applications (Shiny, Streamlit, Dash, Tableau).
- Demonstrated interest in industries served by TransUnion, such as financial services, insurance, fraud, and digital marketing.
- Ability to apply strong project and time management skills to lead multiple projects simultaneously with limited supervision in a collaborative and fast-paced environment.
- Evidence of strong analytical, critical, and creative thinking and willingness to take initiative in problem-solving.
- Versatile interpersonal skills with the ability to effectively communicate at multiple levels within and outside the organization.
- Proven ability to operate effectively in a complex, dynamic, matrixed environment.
- Good verbal and written communication skills.
- Proven ability to translate technical concepts into actionable recommendations in a manner suitable to influence business partners and decision-makers inside and outside the organization towards desired outcomes.
- Ability to travel 10-20% of the time.

Impact You'll Make
- Partner with internal and external cross-functional teams to drive the setup and ongoing success of new data science environments in multiple markets. Tasks will include, among others, the development of effective data science workflows, the localization of global tools, and liaising with global SMEs in data science and technology.
- Lead the development of analytic solutions using languages such as C++, R, Python, SQL, Hive, and Spark, formalizing some of these efforts into repeatable process improvements.
- Assist Global Technology with maintenance of the tools and frameworks used by analysts on the high-performance computing (HPC) cluster, and be a lead representative of the Data Science Development team in projects led by Global Technology, as a subject matter expert on machine learning and scientific computing.
- Own data science consulting responsibilities for a variety of regions, working to identify strategies and opportunities to test and adopt TransUnion’s analytic products and services. In this capacity, you will interact directly with TransUnion’s matrix partners and provide an analytic perspective and general support as needed.
- Contribute to research and innovation initiatives in collaboration with other DSA peers; you may lead small analytic research teams or manage research interns on a project basis, as needed.
- Participate in interviewing and evaluation of new talent, mentoring and training of junior colleagues, fostering a high-performance culture, and cultivating an environment that promotes excellence and reflects the TransUnion brand.

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Sr Consultant, Data Science and Analytics
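As a hedged illustration of integrating one of the machine learning frameworks the posting names (XGBoost), here is a minimal training-and-scoring sketch on synthetic data; the dataset, parameters, and the credit-risk framing are invented for illustration.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a risk-modeling sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# DMatrix is XGBoost's optimized data container.
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1,
          "eval_metric": "auc"}
model = xgb.train(params, dtrain, num_boost_round=200,
                  evals=[(dtest, "test")], early_stopping_rounds=20)

print("test AUC:", model.best_score)
```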
Posted 1 day ago
5.0 - 9.0 years
10 - 18 Lacs
Pune
Work from Office
- Experience in big data development and data engineering.
- Proficiency in Java and experience with Apache Spark.
- Experience in API development and integration.
- Strong understanding of data engineering principles and big data concepts.

Required Candidate Profile
- Familiarity with big data tools such as Hadoop, HDFS, Hive, HBase, and Kafka.
- Experience with SQL and NoSQL databases.
- Strong communication and collaboration skills.
Posted 1 day ago
5.0 - 7.0 years
9 - 14 Lacs
Noida
Work from Office
We are seeking a skilled AWS Databricks Platform Administrator to manage and optimize our Databricks environment. The ideal candidate will have strong expertise in user access management and user persona development, and the ability to collaborate with architects to implement configuration changes. This role involves ensuring the security, performance, and reliability of the Databricks platform while supporting users and maintaining compliance with organizational policies.

- Good experience with the SDLC.
- Databricks platform administration is a must.
- Must have security and access control experience, including user provisioning.
- Services integration experience.
- Should be able to work with enterprise architects.
- Good to have: API experience.

Required Skills & Qualifications
- 5-7 years of experience as a Databricks Administrator or in a similar role.
- Strong experience with AWS services (IAM, S3, EC2, Lambda, Glue, etc.).
- Expertise in Databricks administration, workspace management, and security configurations.
- Hands-on experience with AD groups, user access management, RBAC, and IAM policies (see the sketch below).
- Experience in developing and managing user personas within enterprise environments.
- Strong understanding of network security, authentication, and data governance.
- Proficiency in Python, SQL, and Spark for troubleshooting and automation.
- Familiarity with Terraform, CloudFormation, or Infrastructure as Code (IaC) is a plus.
- Knowledge of CI/CD pipelines and DevOps best practices is desirable.
- Excellent communication and documentation skills.

Preferred Certifications
- AWS Certified Solutions Architect (Associate or Professional)
- Databricks Certified Data Engineer or Administrator
- Certified Information Systems Security Professional (CISSP) (nice to have)

Mandatory Competencies
- Data Science: Databricks
- Cloud: AWS, Azure, AWS Lambda
- Data on Cloud: AWS S3
- Python: Python
- Database: SQL
- Big Data: Spark
- Behavioral: Communication and collaboration
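A minimal sketch of the access-control automation this role touches on, using boto3 to create a scoped S3 read-only policy and attach it to an IAM group; the group name, bucket, and policy document are hypothetical, and this is one illustrative pattern rather than the employer's actual provisioning procedure.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical read-only policy scoped to one analytics bucket.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-analytics-bucket",
            "arn:aws:s3:::example-analytics-bucket/*",
        ],
    }],
}

resp = iam.create_policy(
    PolicyName="AnalyticsReadOnly",
    PolicyDocument=json.dumps(policy_doc),
)

# Attach the new policy to a pre-existing, hypothetical user group.
iam.attach_group_policy(
    GroupName="databricks-analysts",
    PolicyArn=resp["Policy"]["Arn"],
)
```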
Posted 1 day ago
4.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description – Senior Data Engineer

We at Pine Labs are looking for those who share our core belief - “Every Day is Game day”. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose
We are looking for skilled Senior Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing.

Responsibilities We Entrust You With

Data Pipeline Development
- Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue.
- Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis (see the producer sketch after this posting).

Real-Time Analytics
- Configure and optimize Apache Pinot clusters for sub-second query performance and high availability.
- Design indexing strategies and schema structures to support real-time and historical data use cases.

Cloud Infrastructure Management
- Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions.
- Implement infrastructure as code (IaC) using tools like Terraform or AWS CDK.

Performance Optimization
- Optimize data pipelines and queries to handle high throughput and large-scale data efficiently.
- Monitor and tune Apache Pinot and AWS components to achieve peak performance.

Data Governance & Security
- Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC2).
- Implement data lineage, access controls, and auditing mechanisms.

Collaboration
- Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions.
- Collaborate in an Agile environment, participating in sprints, standups, and retrospectives.

Relevant Work Experience
- 4-12 years of hands-on experience in data engineering or related roles.
- Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse).
- Proficiency in Python, Java, or Scala for data processing and pipeline development.
- Strong SQL skills and experience with both relational and NoSQL databases.
- Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis.
- Familiarity with big data tools like Apache Spark, Flink, or Airflow.
- Strong problem-solving skills and a proactive approach to challenges.
- Excellent communication and collaboration abilities in cross-functional teams.

Preferred Qualifications
- Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg).
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch.
- Familiarity with data visualization tools like Tableau or Superset.

What We Offer
- Competitive compensation based on experience.
- Flexible work environment with opportunities for growth.
- Work on cutting-edge technologies and projects in data engineering and analytics.
What We Value In Our People
- You take the shot: You Decide Fast and You Deliver Right.
- You are the CEO of what you do: you show ownership and make things happen.
- You own tomorrow: by building solutions for the merchants and doing the right thing.
- You sign your work like an artist: you seek to learn and take pride in the work you do.
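To illustrate the real-time ingestion path described above (streaming events through Kafka into an Apache Pinot REALTIME table), here is a minimal, hedged producer sketch; kafka-python is an assumed client choice, the broker, topic, and event fields are invented, and the Pinot table configured to consume this topic is assumed to exist.

```python
import json
import time
from kafka import KafkaProducer  # assumption: kafka-python client

# Hypothetical broker and topic; a Pinot REALTIME table would be configured
# to consume this topic and index events for sub-second queries.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def emit_payment_event(merchant_id: str, amount: float) -> None:
    event = {
        "merchant_id": merchant_id,
        "amount": amount,
        "event_time_ms": int(time.time() * 1000),  # Pinot time column
    }
    producer.send("payment-events", value=event)

emit_payment_event("m-1001", 249.50)
producer.flush()  # block until the event is acknowledged
```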
Posted 1 day ago
2.0 - 5.0 years
16 - 18 Lacs
Coimbatore
Work from Office
Overview

Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products, as well as research and evaluation of new technical solutions.

Responsibilities
- Designing, building, testing, and deploying data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.).
- Developing data pipelines, and monitoring, maintaining, and tuning them.
- Writing at-scale data transformations in SQL and Python (see the sketch below).
- Performing code reviews and providing leadership and guidance to junior developers.

Qualifications
- Curiosity in learning the business requirements that are driving the engineering requirements.
- Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team.
- 3+ years of SQL experience.
- 3+ years of professional Python experience.
- 3+ years of professional Linux experience.
- Preferred familiarity with Snowflake, AWS, GCP, and Azure cloud environments.
- Intellectual curiosity and drive; self-starters will thrive in this position.
- Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills
- BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience.
- Experience with big data and/or infrastructure; bonus for experience in setting up petabytes of data so they can be easily accessed.
- Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats.
- Experience working with classical relational databases (Postgres, MySQL, MSSQL).
- Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations).
- Proven ability to independently execute projects from concept to implementation to launch, and to maintain a live product.

Perks of working at Annalect
- We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more!
- Halloween is a special day on our calendar since it is our Founding Day; we go all out with decorations, costumes, and prizes!
- Generous vacation policy. Paid time off (PTO) includes vacation days, personal days, and a Summer Friday program. Extended time off around the holiday season: our office is closed between Xmas and New Year to encourage our hardworking employees to rest, recharge and celebrate the season with family and friends.
- As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also have the flexibility and pace of a “startup”: we move fast, break things, and innovate.
- Work with a modern stack and environment to keep on learning and improving, helping to experiment with and shape the latest technologies.
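The "at-scale data transformations in SQL and Python" responsibility above can be made concrete with a small, hedged sketch. It uses the standard library's sqlite3 purely so the example is self-contained; in practice the same SQL pattern would run against Snowflake or another warehouse, and the table and columns are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE spend (campaign TEXT, day TEXT, cost REAL);
    INSERT INTO spend VALUES
        ('brand',  '2024-01-01', 120.0),
        ('brand',  '2024-01-02',  80.0),
        ('search', '2024-01-01', 200.0),
        ('search', '2024-01-02', 150.0);
""")

# A typical transformation: per-campaign running spend via a window
# function (requires SQLite >= 3.25, bundled with modern Python builds).
rows = conn.execute("""
    SELECT campaign, day,
           SUM(cost) OVER (PARTITION BY campaign ORDER BY day) AS running_cost
    FROM spend
    ORDER BY campaign, day
""").fetchall()

for campaign, day, running in rows:
    print(campaign, day, running)
```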
Posted 1 day ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the design and development of applications, and provide regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark’s architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred Technical And Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 1 day ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Established in 2004, OLIVER is the world’s first and only specialist in designing, building, and running bespoke in-house agencies and marketing ecosystems for brands. We partner with over 300 clients in 40+ countries and counting. Our unique model drives creativity and efficiency, allowing us to deliver tailored solutions that resonate deeply with audiences.

As part of The Brandtech Group, we're at the forefront of leveraging cutting-edge AI technology to revolutionise how we create and deliver work. Our AI solutions enhance efficiency, spark creativity, and drive insightful decision-making, empowering our teams to produce innovative and impactful results.

Job Title: Account Manager
Role: Freelancer
Duration: 3 months
Location: Mumbai, India

About the role:
OLIVER is a rapidly expanding creative services agency with a twist: we provide our clients with bespoke dedicated agencies that operate from within their offices. We are building a team of individuals who have the ability and confidence to learn on the job and help the client transform their marketing capabilities. We are currently looking for an Account Manager to join our expanding team working with UK brands across a wide variety of briefs, from social media content, video production, and strategy to digital assets. What we want to see is a proven track record of driving multiple complex projects forward, a positive and proactive nature, and the ability to bring in new business, supporting the senior clients onsite. You should have meticulous attention to detail, understand the importance of the profitability of your projects for the agency, and be able to demonstrate yourself as a safe pair of hands in the day-to-day management of clients.

What you will be doing:
- Acting as day-to-day contact for FMCG/beauty and personal care brands, providing excellent client service and supporting the onsite U-Studio team.
- Working with the wider account team and collaborating with the studio, including our digital designers and studio manager.
- Being accountable for the brief, and working with the client to ensure the team has obtained the right information required to begin work on the project.
- Ensuring you understand how to report and manage operational income for your projects in a timely and accurate manner.
- Being accountable for timely billing and reporting revenue to the Group Account Director.
- Working with the studio to manage timing plans.

What you need to be great in this role:
- Excellent client engagement skills with the ability to proactively organise and influence clients and build strong and effective working relationships.
- Demonstrable account management experience of a minimum of 3+ years and a proven track record of managing global clients and campaigns.
- The ability to effectively and proactively manage account finances and invoicing.
- Highly creative, with the ability to generate ideas and practically contribute to the design studio output.
- Proficient in Microsoft Office, Excel, and other related software.
- Understanding of how to integrate with a client-side team whilst maintaining a top-tier agency service.
- A passion for, and inquisitiveness about, AI and new technologies.
- Understanding and knowledge of AI tools is beneficial, but the ability to learn and digest the benefits and features of AI tools is critical.

Req ID: 13785

Our values shape everything we do:
- Be Ambitious to succeed
- Be Imaginative to push the boundaries of what’s possible
- Be Inspirational to do groundbreaking work
- Be always learning and listening to understand
- Be Results-focused to exceed expectations
- Be actively pro-inclusive and anti-racist across our community, clients and creations

OLIVER, a part of the Brandtech Group, is an equal opportunity employer committed to creating an inclusive working environment where all employees are encouraged to reach their full potential, and individual differences are valued and respected. All applicants shall be considered for employment without regard to race, ethnicity, religion, gender, sexual orientation, gender identity, age, neurodivergence, disability status, or any other characteristic protected by local laws.

OLIVER has set ambitious environmental goals around sustainability, with science-based emissions reduction targets. Collectively, we work towards our mission, embedding sustainability into every department and through every stage of the project lifecycle.
Posted 1 day ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad, Bengaluru
Work from Office
Job Summary
Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements
Required:
- Apache Spark (latest stable version)
- Scala (version 2.12 or higher)
- Python (version 3.6 or higher)
- Big data tools and frameworks supporting Spark and Scala
Preferred:
- Cloud platforms such as AWS, Azure, or GCP for data deployment
- Data processing or orchestration tools like Kafka, Hadoop, or Airflow
- Data visualization tools for data insights

Overall Responsibilities
- Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python (a streaming sketch follows below).
- Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions.
- Mentor and guide junior team members on best practices in big data development.
- Evaluate and recommend new technologies and tools to improve data processing and quality.
- Stay informed about industry trends and emerging technologies relevant to big data and analytics.
- Ensure timely delivery of data projects with high standards of quality, performance, and security.
- Lead technical reviews and code reviews, and provide input to improve overall development standards and practices.
- Contribute to architecture design discussions and assist in establishing data governance standards.

Technical Skills (By Category)
Programming Languages:
- Essential: Spark (Scala), Python
- Preferred: knowledge of Java or other JVM languages
Data Management & Databases:
- Experience with distributed data storage solutions (HDFS, S3, etc.)
- Familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration
Cloud Technologies:
- Preferred: cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment
Frameworks & Libraries:
- Spark MLlib, Spark SQL, Spark Streaming
- Data processing libraries in Python (pandas, PySpark)
Development Tools & Methodologies:
- Version control (Git, Bitbucket)
- Agile methodologies (Scrum, Kanban)
- Data pipeline orchestration tools (Apache Airflow, NiFi)
Security & Compliance:
- Understanding of data security best practices and data privacy regulations

Experience Requirements
- 5 to 10 years of hands-on experience in big data development and architecture.
- Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python.
- Demonstrated ability to lead technical projects and mentor team members.
- Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders.
- Track record of delivering scalable, efficient, and secure data solutions in complex environments.

Day-to-Day Activities
- Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python.
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions.
- Lead code reviews, mentor junior team members, and enforce coding standards.
- Participate in architecture design and recommend best practices in big data development.
- Monitor data workflow performance and troubleshoot issues to ensure data quality and reliability.
- Stay updated with industry trends and evaluate new tools and frameworks for potential implementation.
- Document technical designs, data flows, and implementation procedures.
- Contribute to continuous improvement initiatives to optimize data processing workflows.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in cloud platforms, big data, or programming languages are advantageous.
- Continuous learning on innovative data technologies and frameworks.

Professional Competencies
- Strong analytical and problem-solving skills with a focus on scalable data solutions.
- Leadership qualities with the ability to guide and mentor team members.
- Excellent communication skills to articulate technical concepts to diverse audiences.
- Ability to work collaboratively in cross-functional teams and fast-paced environments.
- Adaptability to evolving technologies and industry trends.
- Strong organizational skills for managing multiple projects and priorities.
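Complementing the Spark Streaming item in the frameworks list above, here is a minimal, hedged Structured Streaming sketch. It is written in PySpark to keep one language across this document's examples even though the posting centres on Scala; the topic and broker are invented, and running it assumes the spark-sql-kafka connector package is on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read a hypothetical Kafka topic as an unbounded streaming DataFrame.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clicks")
    .load()
)

# Kafka values arrive as bytes; decode and count events per 1-minute window.
counts = (
    raw.selectExpr("CAST(value AS STRING) AS value", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Console sink is for demonstration; production would write to a real sink.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```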
Posted 1 day ago
1.0 - 6.0 years
8 - 12 Lacs
Chennai
Hybrid
- Minimum 1-3 years of experience in data science, NLP, and Python (see the sketch below).
- Experience with PyTorch, scikit-learn, and NLP libraries (NLTK, spaCy, Hugging Face).
- Help deploy AI/ML solutions on AWS, GCP, or Azure.
- Experience in SQL for data manipulation and analysis.
- Experience in big data processing: Spark, Pandas, Dask.
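A minimal sketch of the NLP tooling this role lists, using the Hugging Face transformers pipeline API for sentiment scoring; the model is the library's default and the example sentences are invented.

```python
from transformers import pipeline

# Downloads a default sentiment model on first run (network required).
classifier = pipeline("sentiment-analysis")

reviews = [
    "The delivery was quick and the product works great.",
    "Support never replied and the app keeps crashing.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```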
Posted 1 day ago
7.0 - 12.0 years
25 - 40 Lacs
Bengaluru
Hybrid
Role: Data Engineer
Experience: 7+ Years
Notice: Immediate
Skills: AWS (S3, Glue, Lambda, EC2), Spark, PySpark, Python, Airflow (see the DAG sketch below)
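Since the posting pairs Airflow with AWS Glue and PySpark work, here is a minimal, hedged Airflow 2 DAG sketch; the DAG id, schedule, and task callables are invented stubs standing in for real S3 and Glue steps.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull raw files from S3")  # stub for a real boto3 download

def transform() -> None:
    print("run the PySpark/Glue job")  # stub for a Glue job trigger

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract must finish before transform starts
```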
Posted 1 day ago
5.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Diverse Lynx is looking for a PySpark / Azure Databricks engineer to join our dynamic team and embark on a rewarding career journey.
- Develop and maintain big data pipelines using PySpark.
- Integrate Azure Databricks for scalable data processing.
- Perform data transformation and optimization tasks.
- Collaborate with analysts and data scientists.
Posted 1 day ago
4.0 - 6.0 years
12 - 17 Lacs
Chennai
Remote
Role & responsibilities Data Engineer Contract Role Location : Remote Experience: 4 to 6 years Role Summary: The Offshore Technical Resource will support ongoing development and maintenance activities by delivering high-quality technical solutions. This resource will work closely with onshore teams, contribute to system enhancements, bug fixes, and provide timely technical support as part of the Datavant remediation and D365 support efforts. Key Responsibilities : Develop, test, and deploy technical components as per the specifications provided by the onshore team. Provide timely resolution of technical issues and production support tickets Participate in code reviews, ensuring adherence to coding standards and best practices. Contribute to system integrations, data migrations, and configuration tasks as needed. Document technical specifications, procedures, and support guides. Collaborate with QA teams to support testing activities and defect resolution. Maintain effective communication with onshore leads to align on priorities and deliverables. Qualifications: Hands-on expertise in ERP systems (preferably D365) or similar platforms. Proficiency in Datavant, Spark, SQL, and Python for data engineering and remediation tasks. Strong problem-solving and debugging skills.Good verbal and written communication skills. Preferred candidate profile
Posted 1 day ago
12.0 years
10 - 45 Lacs
Pune, Maharashtra, India
On-site
Location: Pune
Experience: 8-12 Years
Work Mode: Hybrid
Mandatory Skills: Python, PySpark, SQL, ETL, data pipelines, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and architecture design.
Note: the interview mode is face-to-face.

Overview
We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide solutions that are forward-thinking in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow.

Skills And Qualifications
- Bachelor's and/or master’s degree in computer science or equivalent experience.
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture (see the sketch below).
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack.
- Desirable: ETL with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
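To give the Databricks Delta Lake requirement some shape, here is a hedged PySpark sketch of an upsert (MERGE) into a Delta table using the delta-spark package; the path, keys, and sample rows are invented, and a Spark session already configured with the Delta extensions (as on Databricks) is assumed.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # Delta-enabled on Databricks

# Hypothetical incremental batch of customer updates.
updates = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"],
)

target = DeltaTable.forPath(spark, "/mnt/lake/silver/customers")

# Upsert: update matching rows, insert new ones (a classic type-1 merge).
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```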
Posted 1 day ago
0 years
0 Lacs
India
Remote
Company: MARS Management – India’s Hip-Hop Creative Solutions Agency
Location: Remote / Hybrid (Lucknow preferred)
Duration: 3 Months
Stipend: ₹2,000–₹3,500/month (performance-based bonuses possible)

About us:
MARS Management is a leading youth-led agency focused on Indian hip-hop culture. We build branded content, curate rap events, and manage underground talent across India. Our flagship properties include ORBIT by MARS and Lucknow Hip-Hop, and our event IPs include UP Wala Rap Cypher & The Underground Antriksh. We manage a community of 400+ artists from 30+ remote cities across the whole of UP. We have successfully worked with/for brands like Thums Up, Hero, Bacardi, and Red FM, as well as artists like Seedhe Maut, Raga, DG, Bella, Panther, MC Insane, and more.

Role Description
We’re looking for a smart, organized, and music-enthusiastic intern who can manage brand communication, pitch to new collaborators, and support growth through external outreach and strategic use of social media. You’ll work closely with the content and events team, helping MARS connect with brands, colleges, and potential collaborators, while also contributing to our digital presence.

> Responsibilities:
- Manage professional communication with brands, artists, and collaborators via email/DM.
- Identify and pitch to youth-focused brands or colleges for events, collabs, and partnerships.
- Support social media strategy by writing captions, engaging with followers, and tracking responses.
- Assist in planning digital campaigns or reels that can spark audience and brand interest.
- Contribute to growth plans and outreach documents that support business development goals.
- Stay updated on music industry trends and tools to keep content fresh and relevant.
- Collaborate with management to align the community's voice with broader goals.

> Requirements:
- Strong passion for hip-hop culture, indie music, and community building.
- Strong communication and writing skills (English or Hindi).
- Comfort with social platforms like Instagram & LinkedIn.
- Knowledge of music industry trends and awareness of the artists.
- Interest in brand strategy & outreach.
- Familiarity with Google Docs, Sheets, and Canva is helpful.
- Bonus: past outreach/media experience.

> Perks:
- Internship certificate + letter of recommendation.
- Performance bonus if you help close a brand/college collaboration.
- Direct access to real events, artists, and the strategy team.
- Flexible working hours + a supportive creative team.
- Proper guidance & learning throughout the work, from industry experts.
- Opportunity to be part of the core team of a growing community.
- Firsthand exposure to the music industry and networking with artists.
- Future opportunity for freelance or extended roles.

Candidates can also send their CV/resume directly to hr@marsmanagement.in or 91-8527185529 with a 2–3 line statement about why you're a good fit for this role.
Posted 1 day ago
1.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture. Learn more about Target here.

About you:
- Four-year degree or equivalent experience.
- 1+ year as a Data Analyst with strong academic performance in a quantitative field, or strong equivalent experience.
- Intermediate SQL experience writing complex queries.
- Solid problem solving and analytical skills; data curiosity; data mining, creation, and consolidation.
- Support conclusions with a clear, understandable story that leverages descriptive statistics, basic inferential statistics, and data visualizations.
- Willingness to ask questions about business objectives and the measurement needs for a project workstream, and ability to measure objectives & key results.
- Excellent communication skills with the ability to speak to both business and technical teams, and translate ideas between them.
- Knowledge of A/B testing methods, time series, S&OP planning, and forecasting models, including statistical analysis (see the sketch below).
- Experience in analytics tools such as SQL, Excel, Hadoop, Hive, Spark, Python, R, Domo, Adobe Analytics (or Google Analytics), and/or equivalent technologies.
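As a hedged illustration of the A/B testing knowledge listed above, here is a minimal two-proportion z-test using statsmodels; the conversion counts and significance threshold are invented.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment: conversions out of visitors per variant.
conversions = [412, 470]   # control, treatment
visitors = [10000, 10000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```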
Posted 1 day ago
4.0 - 9.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Data Analyst, Food Supply Chain

ABOUT US
Target is an iconic brand, a Fortune 50 company, and one of America's leading retailers. At Target, we have a vision: to become the best - the best culture and brand, the best place for growth, and the company with the best reputation. We offer an inclusive, collaborative and energetic work environment that rewards those who perform. We deliver engaging, innovative and on-trend experiences for our team members and our guests. We invest in our team members' futures by developing leaders and providing a breadth of opportunities for professional development. It takes the best to become the best, and we are committed to building a team that does the right thing for our guests, shareholders, team members and communities.

PRIMARY FUNCTION
We are hiring a Data Analyst, Food Supply Chain to join our team. As part of the F&B Data Analytics team, our analysts work closely with business owners as well as technology and data product teams staffed with product owners and engineers. They support all F&B strategic initiatives with data, reporting, visualization, insights, analysis, analytics, and automation. F&B teams rely on this team of analysts to bring data to support decision making.

ABOUT THE JOB
As a Data Analyst you will support all business areas within the Food Supply Chain of Target with critical data analysis that helps the field and leadership make profitable decisions. Become a data expert or business analyst and utilize tools like decision trees, clustering, regression, time series, structural equation modeling, linear programming, SQL, and OLAP. Use your skills, experience and talents to be a part of groundbreaking thinking and visionary goals. Interface with Target business representatives and leaders to validate business requirements/requests for reporting solutions. Determine the best methods to gather data and present information. Build reporting solutions to meet business needs. Communicate with the project/team manager to share knowledge and findings. Document design and requirements for reporting solutions. Core responsibilities of this job are described within this job description; job duties may change at any time due to business need.

ABOUT YOU
- 4-year degree or equivalent experience.
- 1+ year as a Data Analyst with strong academic performance in a quantitative field, or strong equivalent experience.
- Intermediate SQL experience writing complex queries.
- Solid problem solving and analytical skills; data curiosity; data mining, creation, and consolidation.
- Support conclusions with a clear, understandable story that leverages descriptive statistics, basic inferential statistics, and data visualizations.
- Willingness to ask questions about business objectives and the measurement needs for a project workstream, and ability to measure objectives & key results.
- Excellent communication skills with the ability to speak to both business and technical teams, and translate ideas between them.
- Knowledge of A/B testing methods, time series, and regression models, including statistical analysis.
- Experience in analytics tools such as SQL, Excel, Hadoop, Hive, Spark, Python, R, Domo, and/or equivalent technologies.

Useful Links:
- Life at Target: https://india.target.com/
- Benefits: https://india.target.com/life-at-target/workplace/benefits
- Culture: https://india.target.com/life-at-target/belonging
Posted 1 day ago
5.0 - 10.0 years
16 - 20 Lacs
Pune
Work from Office
Job Title: Senior / Lead Data Engineer
Company: Synechron Technologies
Locations: Pune or Chennai
Experience: 5 to 12 years
Synechron Technologies is seeking an accomplished Senior or Lead Data Engineer with expertise in Java and Big Data technologies. The ideal candidate will have a strong background in Java Spark, with extensive experience working with big data frameworks such as Spark, Hadoop, HBase, Couchbase, and Phoenix. You will lead the design and development of scalable data solutions, ensuring efficient data processing and deployment in a modern technology environment.
Key Responsibilities:
Lead the development and optimization of large-scale data pipelines using Java and Spark.
Design, implement, and maintain data infrastructure leveraging Spark, Hadoop, HBase, Couchbase, and Phoenix.
Collaborate with cross-functional teams to gather requirements and develop robust data solutions.
Lead deployment automation and management using CI/CD tools including Jenkins, Bitbucket, GIT, Docker, and OpenShift.
Ensure the performance, security, and reliability of data processing systems.
Provide technical guidance to team members and participate in code reviews.
Stay updated on emerging technologies and leverage best practices in data engineering.
Qualifications & Skills:
5 to 14 years of experience as a Data Engineer or in a similar role.
Strong expertise in Java programming and Apache Spark.
Proven experience with Big Data technologies: Spark, Hadoop, HBase, Couchbase, and Phoenix.
Hands-on experience with CI/CD tools: Jenkins, Bitbucket, GIT, Docker, OpenShift.
Solid understanding of data modeling, ETL workflows, and data architecture.
Excellent problem-solving, communication, and leadership skills.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative Same Difference is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice
Posted 1 day ago
0.0 years
10 - 15 Lacs
Pune
Work from Office
Job Title: Full Stack Developer with Java, SQL, React, Python
Location: Pune, India
Corporate Title: VP
Role Description
Technology underpins our entire business. Our Technology, Data and Innovation (TDI) strategy is focused on strengthening engineering expertise, introducing an agile delivery model, as well as modernising the bank's IT infrastructure. We continue to invest and build a team of visionary tech talent, providing you with the training, freedom and opportunity to do pioneering work. As an Engineer you will develop and deliver significant components of engineering solutions to satisfy complex and diverse business goals. You will engage and partner with the business whilst working within a broader creative, collaborative and innovative team, with a strong desire to make an impact.
You will be joining the dbSleuth Team within Regulatory & Cross Product IT, delivering Trader and Counterparty surveillance across all business sections of Deutsche Bank. We are an engineering focused organization, striving for the highest quality architecture, design and code across our teams. You will help to build our surveillance systems, working in a fast-paced, agile environment. Our workload for new deliveries is high, using React for UI development, Python/Spark/Scala for services, and Hadoop Big Data and data science for anomaly detection using machine learning and statistical risk models.
What we'll offer you
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry relevant certifications and education
Accident and term life insurance
Your key responsibilities
Provide leadership within a delivery team: modelling, coding & testing, and collaborating to understand requirements, create stories, design solutions, implement them and help test them.
Help create a culture of learning and continuous improvement within your team and be accountable for successful delivery of a regulatory critical workstream.
Employ a range of techniques to analyse problems and evaluate multiple solutions against engineering, business & strategic criteria.
Identify and resolve barriers to business deliveries, implementing solutions which iteratively deliver value.
Design solutions using common design patterns with a range of design tools & techniques.
Conduct peer reviews to ensure designs are fit for purpose, extensible & re-usable.
Design & build solutions which are secure & controlled.
Your skills and experience
Analytical thinker and team player with strong communication skills
Enable experimentation and fast learning approaches to creating business solutions
Familiarity with the use of solution design tools
Understanding of key elements of security, risk & control
Track record in identifying and making improvements to the delivery process
Experience working with very large datasets using technologies such as Python, React JS and SQL, utilizing a good understanding of UI functioning & infrastructure
Experience utilizing data modelling tools, Domain Driven Design and a strong knowledge of SQL and advanced data analysis to deliver good quality code within enterprise scale development (CI/CD)
Experience with development utilising SDLC tools - Git, JIRA, Artifactory, Jenkins/TeamCity, OpenShift
How we'll support you
Posted 1 day ago
15.0 - 20.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: Lead Engineer
Location: Pune, India
Role Description
The Lead Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
Planning and developing entire engineering solutions to accomplish business goals
Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
Ensuring maintainability and reusability of engineering solutions
Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
Reviewing engineering plans and quality to drive re-use and improve engineering capability
Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank
What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry relevant certifications and education
Accident and term life insurance
Your Key Responsibilities:
Hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities
Champion engineering best practices and guide/mentor the team to achieve high performance
Work closely with business stakeholders, the Tribe Lead, the Product Owner and the Lead Architect to successfully deliver the business outcomes
Acquire functional knowledge of the business capability being digitized/re-engineered
Demonstrate ownership, inspire others, show innovative thinking and a growth mindset, and collaborate for success
Your Skills & Experience:
Minimum 15 years of IT industry experience in full stack development
Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS
Strong experience in Big Data processing: Apache Spark, Hadoop, BigQuery, DataProc, Dataflow etc.
Strong experience in Kubernetes and the OpenShift container platform
Experience with databases: Oracle, PostgreSQL, MongoDB, Redis/Hazelcast; should understand data modeling, normalization, and performance optimization
Experience in message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e. Kafka, Pub/Sub etc.
Experience of working on public cloud - GCP preferred, AWS or Azure
Knowledge of various distributed/multi-tiered architecture styles: micro-services, data mesh, integration patterns etc.
Experience with modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, Git Actions etc.
Experience designing solutions based on DDD and implementing Clean/Hexagonal Architecture - efficient systems that can handle large-scale operation
Experience leading teams and mentoring developers
Focus on quality - experience with TDD, BDD, and stress and contract tests
Proficient in working with APIs (Application Programming Interfaces) and understanding data formats like JSON, XML, YAML, Parquet etc.
Key Skills:
Java
Spring Boot
NodeJS
SQL/PLSQL
ReactJS
Advantageous:
Prior experience in the Banking/Finance domain
Experience with hybrid cloud solutions, preferably using GCP
Experience with product development
How we'll support you:
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 day ago
11.0 - 16.0 years
25 - 30 Lacs
Pune
Work from Office
Job Title: Service Operations - Production Engineer Support, AVP
Location: Pune, India
Role Description
You will be operating within the Corporate Bank Production domain or in Corporate Banking subdivisions as an AVP - Production Support Engineer. In this role, you will be accountable for driving a culture of proactive continual improvement in the Production environment through application and user request support, troubleshooting and resolving errors in the production environment, automation of manual work, monitoring improvements and platform hygiene. You will also train and mentor new and existing team members, support the resolution of issues and conflicts, and prepare reports and meetings.
The candidate should have experience in all relevant tools used in the Service Operations environment, have specialist expertise in one or more technical domains, and ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs).
Ensure all BAU support queries from the business are handled on priority and within the agreed SLA, and ensure all application stability issues are well taken care of.
Support the resolution of incidents and problems within the team. Assist with the resolution of complex incidents. Ensure that the right problem-solving techniques and processes are applied.
Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies and remove repetition to streamline support activities, reduce risk, and improve system availability.
Be responsible for your own engineering delivery and, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams.
Act as a Production Engineering role model to enhance the technical capability of the Production Support teams to create a future operating model embedded with engineering culture.
Train and mentor team members to grow into the next role.
Bring in a culture of innovation, engineering and an automation mindset.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry relevant certifications and education
Accident and term life insurance
Your key responsibilities
Lead by example to drive a culture of proactive continual improvement in the Production environment through automation of manual work, monitoring improvements and platform hygiene.
Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues.
Engage in the Software Development Lifecycle (SDLC) to enhance Production standards and controls.
Update the Run Book and KEDB as and when required.
Participate in all BCP and component failure tests based on the run books.
Understand the flow of data through the application infrastructure; it is critical to understand the dataflow so as to best provide operational support.
Perform event monitoring and management via a 24x7 workbench that is both monitoring and regularly probing the service environment and acting on the instruction of a run book.
Drive knowledge management across the supported applications and ensure full compliance.
Work with team members to identify areas of focus where training may improve team performance and incident resolution.
Your skills and experience
Recent experience of applying technical solutions to improve the stability of production environments
Working experience of some of the following technology skills:
Technologies/Frameworks: Shell scripting and/or Python; Java 8/OpenJDK 11 (at least) - for debugging; familiarity with the Spring Boot framework; Unix troubleshooting skills; Hadoop framework stack; Oracle 12c/19c - for PL/SQL, with familiarity with OEM tooling to review AWR reports and parameters; NoSQL; MQ knowledge
ITIL v3 certified (must)
Configuration management tooling: Ansible
Operating system/platform: RHEL 7.x (preferred), RHEL 6.x; OpenShift (as we move towards cloud computing and the fact that Fabric is dependent on OpenShift)
CI/CD: Jenkins (preferred), TeamCity
APM tooling: Splunk, Geneos, NewRelic, Prometheus-Grafana
Other platforms: scheduling - Ctrl-M is a plus, Airflow, crontab or Autosys, etc.
Methodology: micro-services architecture, SDLC, Agile
Fundamental network topology: TCP, LAN, VPN, GSLB, GTM, etc.
Distributed systems experience on cloud platforms such as Azure or GCP is a plus; familiarity with containerization/Kubernetes
Tools: ServiceNow, Jira, Confluence, BitBucket and/or GIT, Oracle, SQL Plus; familiarity with simple Unix tooling - putty, mPutty, Exceed, (PL/)SQL Developer
Good understanding of the ITIL Service Management framework, including Incident, Problem, and Change processes.
Ability to self-manage a book of work and ensure clear transparency on progress with clear, timely communication of issues.
Excellent troubleshooting and problem solving skills.
Excellent communication skills, both written and verbal, with attention to detail.
Ability to work in virtual teams and in matrix structures.
Experience | Exposure (Recommended):
11+ years of experience in IT in large corporate environments, specifically in the area of controlled production environments, or in Financial Services Technology in a client-facing function
Service Operations and development experience within a global operations context
Global Transaction Banking experience is a plus
Experience of end-to-end Level 2, 3 and 4 management and a good overview of Production/Operations Management overall
Experience of supporting complex application and infrastructure domains
ITIL / best practice service context; ITIL Foundation certification is a plus
Good analytical and problem-solving skills
Knowledge of the following technologies is an added advantage:
ETL flows and pipelines
Big Data, Spark, Hive etc.
Hands-on experience with Splunk/New Relic for creating dashboards along with alerts/rules setups
Understanding of messaging systems like SWIFT and MQ messages
Understanding of trade life cycles, especially for back office
How we'll support you
Posted 1 day ago