Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
4.0 - 7.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Location - Hyderabad. Experience - 4-7 Yrs. Skills Required: Strong programming skills in Python, Java, Spring Boot, or Scala. Experience with ML frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM. Familiarity with information retrieval techniques (BM25, vector search, learning to rank). Knowledge of embedding models, user/item vectorization, or session-based personalization. Experience with large-scale distributed systems (e.g., Spark, Kafka, Kubernetes). Hands-on experience with real-time ML systems. Background in NLP, graph neural networks, or sequence modeling. Experience with A/B testing frameworks and metrics like NDCG, MAP, or CTR.
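To make one of the evaluation metrics named above concrete, here is a minimal sketch (not part of the posting) of NDCG@k computed over a list of graded relevance judgments in ranked order:

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k: discounted gain of the actual ranking divided by the
    gain of the ideal (relevance-sorted) ranking. 1.0 = perfect order."""
    def dcg(rels):
        # log2(i + 2) because positions are 0-indexed here
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))
    ideal_dcg = dcg(sorted(relevances, reverse=True)[:k])
    if ideal_dcg == 0:
        return 0.0
    return dcg(relevances[:k]) / ideal_dcg
```

A ranking already sorted by relevance scores 1.0; swapping relevant results below irrelevant ones pushes the score toward 0.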
Posted 1 day ago
1.0 - 3.0 years
2 - 4 Lacs
Vijayawada, Visakhapatnam, Guntur
Work from Office
Free Lance work (Commission based Sales Executives) in any town in Andhra Pradesh. If you are a working/retired teacher, Principal or Educational Consultant, you can join us as an Academic Coordinator. Mentor's Academy offers you an opportunity to interact with schools in the process of introducing the Spark IIT foundation program. Mentor's Academy works with schools in introducing the Spark IIT foundation program to students, student assessment, and training programs for teachers in objective orientation. Building a list of schools in your town/city, periodically visiting them and conducting student counseling in the schools. Coordinating between the schools and the central office to ensure smooth progress of the Spark IIT foundation program, assessment tests and other activities. Following up on the Spark IIT foundation program in schools and getting feedback from students and teachers. Reporting on a regular basis to the central office. Desired Profiles: Good communication skills and an attitude of accepting challenges. Teachers (full time or part time) with an orientation towards IIT foundation are preferred. Ready to work on a freelance basis. Most of the remuneration is paid in the form of commissions, so the potential to earn is unlimited. A quick learner with a deep personal motivation towards self-improvement. Willing to work in the field and travel within/surrounding areas of the city/town. Having a vehicle would help. A strong desire to do something different, and a liking for education. Prior experience in interacting with schools will definitely help.
Posted 1 day ago
12.0 - 15.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : O9 Solutions
Good to have skills : NA
Minimum 12 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding the team in implementing effective solutions. You will also engage in strategic planning and decision-making processes, ensuring that the applications align with organizational objectives and user needs. Your role will require a balance of technical expertise and leadership skills to drive project success and foster a collaborative environment.
Roles & Responsibilities:
- Play the integration architect role on o9 implementation projects.
- Engage with client stakeholders to understand data requirements, carry out fit-gap analysis, and design the integration solution.
- Review and analyze data provided by the client along with its technical & functional intent and interdependencies.
- Guide the integration team to build and deploy effective solutions to validate and transform customer data for integrated business planning and analytics.
Technical Experience:
- Experience implementing ETL solutions to integrate systems in the client environment.
- Should have played an Integration Architect / Sr Integration Consultant / Sr Integration Developer role in at least 2 implementation projects.
- Strong experience with SQL, PySpark, Python, Spark SQL and ETL tools.
- Proficiency in databases (SQL Server, Oracle, etc.).
- Knowledge of DDL, DML, stored procedures.
- Strong collaborator, team player, and individual contributor.
- Strong communication skills with comfort in speaking with business stakeholders.
- Strong problem solver with the ability to manage and lead the team to push the solution and drive progress.
Professional Experience:
- Proven ability to work creatively and analytically in a problem-solving environment.
- Proven ability to build, manage and foster a team-oriented environment.
- Desire to work in an information systems environment.
- Excellent communication (written and oral) and interpersonal skills.
Additional Information:
- The candidate should have a minimum of 12 years of experience in O9 Solutions.
- This position is based at our Bengaluru office.
- A 15 years full time education is required - BTech/BE/MCA.
- Open to travel - short / long term.
Qualification : 15 years full time education
Posted 1 day ago
4.0 - 5.0 years
5 - 9 Lacs
Noida
Work from Office
Responsibilities
Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc.
Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options.
Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services.
Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources.
Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field. 4-5 years of experience in data engineering roles, with a focus on AWS cloud platforms. Strong understanding of data warehousing and data lake concepts. Proficiency in SQL and at least one programming language (Python/PySpark). Good to have: experience with big data technologies like Hadoop, Spark, and Kafka. Knowledge of data modeling and data quality best practices. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a team.
Preferred Qualifications
Certifications in AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Data.
Mandatory Competencies
Big Data - PySpark
Beh - Communication and collaboration
Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate
Database - SQL Server - SQL Packages
Data Science and Machine Learning - Python
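The "Data Quality" responsibility above (accuracy, completeness, consistency checks inside a pipeline) can be sketched with plain Python. The record layout and validation rules below are hypothetical, invented for illustration, not tied to any specific AWS service:

```python
# Minimal extract-transform sketch: validate records for completeness and
# type consistency, routing failures to a rejects list for remediation.
REQUIRED = {"id", "amount", "currency"}

def validate(record):
    """Return a list of data-quality issues for one record (empty = clean)."""
    issues = []
    missing = REQUIRED - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        issues.append("amount is not numeric")
    return issues

def transform(records):
    """Split records into clean rows and (record, problems) rejects."""
    clean, rejects = [], []
    for rec in records:
        problems = validate(rec)
        if problems:
            rejects.append((rec, problems))
        else:
            clean.append(rec)
    return clean, rejects

rows = [
    {"id": 1, "amount": 9.5, "currency": "INR"},
    {"id": 2, "currency": "INR"},  # missing amount -> rejected
]
good, bad = transform(rows)
```

In a real AWS pipeline the same pattern would typically run inside a Glue job or Lambda, writing rejects to a quarantine S3 prefix rather than an in-memory list.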
Posted 1 day ago
13.0 - 21.0 years
35 - 40 Lacs
Bengaluru
Work from Office
About The Role
Function: Software Engineering, Backend Development
Responsibilities:
You will work on building the biggest neo-banking app of India.
You will own the design process and implementation of standard software engineering methodologies while improving performance, scalability and maintainability.
You will be translating functional and technical requirements into detailed design and architecture.
You will be collaborating with UX designers and product owners for detailed product requirements.
You will be part of a fast-growing engineering group.
You will be responsible for mentoring other engineers, defining our tech culture and helping build a fast-growing team.
Requirements:
2-6 years of experience in product development, design and architecture.
Hands-on expertise in at least one of the following programming languages: Java, Python, NodeJS and Go.
Hands-on expertise in SQL and NoSQL databases.
Expertise in problem solving, data structures and algorithms.
Deep understanding of and experience in object-oriented design.
Ability to design and architect horizontally scalable software systems.
Drive to constantly learn and improve yourself and the processes surrounding you.
Mentoring, collaborating and knowledge sharing with other engineers in the team.
Self-starter.
Strive to write the most optimal code possible day in, day out.
What you will get:
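"Horizontally scalable software systems," mentioned in the requirements above, often rely on consistent hashing so that adding a node remaps only a small fraction of keys. A minimal illustrative sketch (class and parameter names are my own, not from the posting):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring with virtual nodes: keys map to the first
    node clockwise on the ring, so membership changes move few keys."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self._ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # each physical node gets `replicas` virtual points for balance
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def get(self, key):
        h = self._hash(key)
        # first ring point at or after h, wrapping around at the end
        idx = bisect.bisect(self._ring, (h, chr(0x10FFFF)))
        return self._ring[idx % len(self._ring)][1]
```

Production systems (e.g. distributed caches and sharded databases) use the same idea with stronger hash functions and replication across successive nodes.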
Posted 1 day ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About The Role This is an Internal document.
Job Title: Senior Data Engineer
As a Senior Data Engineer, you will play a key role in designing and implementing data solutions @Kotak811. You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure. Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives.
Responsibilities
1. Data Architecture and Design
a. Design and develop scalable, high-performance data architecture and data models.
b. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions.
c. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects.
d. Define and enforce data engineering best practices, standards, and guidelines.
2. Data Pipeline Development & Maintenance
a. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading for real-time and batch use cases.
b. Implement ETL processes to integrate data from various sources into data storage systems.
c. Optimise data pipelines for performance, scalability, and reliability.
i. Identify and resolve performance bottlenecks in data pipelines and analytical systems.
ii. Monitor and analyse system performance metrics, identifying areas for improvement and implementing solutions.
iii. Optimise database performance, including query tuning, indexing, and partitioning strategies.
d. Implement real-time and batch data processing solutions.
3. Data Quality and Governance
a. Implement data quality frameworks and processes to ensure high data integrity and consistency.
b. Design and enforce data management policies and standards.
c. Develop and maintain documentation, data dictionaries, and metadata repositories.
d. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies.
4. ML Models Deployment & Management (is a plus)
a. Responsible for designing, developing, and maintaining the infrastructure and processes necessary for deploying and managing machine learning models in production environments.
b. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes.
c. Optimise model performance and latency for real-time inference in consumer applications.
d. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment.
e. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues.
f. Implement monitoring and logging solutions to track model performance, data drift, and system health.
5. Team Leadership and Mentorship
a. Lead data engineering projects, providing technical guidance and expertise to team members.
i. Conduct code reviews and ensure adherence to coding standards and best practices.
b. Mentor and coach junior data engineers, fostering their professional growth and development.
c. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes.
d. Stay abreast of emerging technologies, trends, and best practices in data engineering and share knowledge within the team.
i. Participate in the evaluation and selection of data engineering tools and technologies.
Qualifications
1. 3-5 years' experience with a Bachelor's Degree in Computer Science, Engineering, Technology or a related field required.
2. Good understanding of streaming technologies like Kafka, Spark Streaming.
3. Experience with Enterprise Business Intelligence Platform/Data platform sizing, tuning, optimization and system landscape integration in large-scale, enterprise deployments.
4. Proficiency in one programming language, preferably Java, Scala or Python.
5. Good knowledge of Agile, SDLC/CI-CD practices and tools.
6. Must have proven experience with Hadoop, MapReduce, Hive, Spark, Scala programming. Must have in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs.
7. Proven experience in development of conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse) and OLAP database solutions.
8. Good understanding of distributed systems.
9. Experience working extensively in a multi-petabyte DW environment.
10. Experience in engineering large-scale systems in a product environment.
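The streaming work described above (Kafka plus Spark Streaming) typically reduces to windowed aggregations over an event stream. A stdlib-only toy version of a tumbling-window count, with a hypothetical (timestamp, key) event schema chosen purely for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window -- the shape of
    computation a Spark Streaming job runs over a Kafka topic."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        bucket = ts - (ts % window_secs)  # window start time
        windows[bucket][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
result = tumbling_window_counts(events, 5)
```

Real engines add what this sketch omits: late-event handling via watermarks, state checkpointing, and partition-parallel execution, which is where the performance-tuning experience the posting asks for comes in.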
Posted 1 day ago
6.0 years
0 Lacs
India
On-site
About the Role The Managed Services Team is responsible for the maintenance of configurations implemented for Managed Fusion customers. These solutions were generally implemented by Professional Services or Partner teams and fall under the Managed Services team’s maintenance after go-live. We collaborate with Support teams as the first-line contact for customers, with the Cloud Engineering team for infrastructure maintenance, and with Customer Success to ensure proper communication with the customer for new feature requests, upgrades and general requests that require more coordination. We continuously define and update best practices with the latest ideas based on lessons learned by the team and discussions with our partner teams. About You You understand common search technology and functionality, including basic features for managing the search experience. You have some basic knowledge of features like Synonyms, Spell Checking, Stop words, Boosting and Blocking, Relevancy, etc. You understand the concept of ingesting data into a search index (via connectors, data feeds or web crawlers), as well as possible manipulation of that data during ingestion. You understand the concept of searching for that data via queries and manipulating the response from the search. You are able to write the JavaScript code necessary to modify these manipulations. You are able to troubleshoot possible issues in the ingestion or query pipelines. Responsibilities Act as a senior technical expert, providing in-depth support and proactive maintenance for a diverse customer base with varying levels of technical expertise. Troubleshoot and resolve complex ingestion, indexing, and system performance issues related to Lucene/Solr, Lucidworks Fusion, and cloud-based infrastructure, utilizing Java stack traces, heap dumps, profiler snapshots, and performance diagnostics.
Identify, reproduce, and document product issues, working closely with engineering teams to influence product improvements and drive faster resolutions. Lead incident response and root cause analysis for high-impact technical issues, ensuring timely and effective resolutions. Participate in the design and implementation of automation tools and monitoring frameworks to improve efficiency and scalability in cloud-based managed services. Mentor and guide junior engineers, fostering a culture of continuous learning and knowledge sharing within the team. Provide thought leadership in troubleshooting methodologies, optimizing system performance, and enhancing overall reliability. Collaborate with global teams and customers across multiple time zones, ensuring seamless communication and issue resolution. Clearly communicate with customers through ticketing systems like Zendesk to provide expert support. Skills & Qualifications Bachelor’s or Master’s degree in a relevant field (Computer Science, Engineering, etc.) or equivalent experience. 6+ years of experience in technical support, search technology, or software development, with a strong focus on search platforms. Expertise in Lucene/Solr, Lucidworks Fusion, or Elasticsearch and the ability to troubleshoot complex search-related issues is highly desirable. Proficiency in Java and scripting languages (JavaScript, Python preferred) for debugging, automation, and tool development. Experience working with cloud platforms (GCP, AWS, or Azure) and containerization technologies like Kubernetes. Ability to analyze logs, stack traces, profiler snapshots, and performance metrics to diagnose system issues effectively. Experience working with connectors, web crawlers, and API integrations. Strong knowledge of HTML, XML, JSON, REST APIs, and tools like Postman. Excellent communication skills, with experience handling enterprise-level support interactions via email and ticketing systems (Zendesk, Jira, etc.).
Prior experience leading incident management, root cause analysis, and customer escalations. Exposure to related open-source technologies (Solr, Tika, Nashorn, Spark, AI, etc.) is a plus. Prior experience working in global support environments that require collaboration across multiple time zones. Experience working with international clients and understanding regional nuances in enterprise support expectations. Willingness to participate in a 24x7 on-call rotation to help support the services you develop; we take an end-to-end ownership approach to what we build!
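The query-pipeline troubleshooting described in this posting often starts by reconstructing the Solr request a customer's application issues. A small sketch of assembling such a request URL with standard Solr select-handler parameters (`q`, `rows`, `wt`, and edismax `bq` boosting); the collection name and field names here are placeholders, not from the posting:

```python
from urllib.parse import urlencode

def build_solr_query(collection, text, rows=10, boost_field=None):
    """Assemble a Solr-style /select URL for a simple field query,
    optionally adding an edismax boost query on another field."""
    params = {"q": f"text:{text}", "rows": rows, "wt": "json"}
    if boost_field:
        # bq boosts documents where boost_field also matches the term
        params.update({"defType": "edismax", "bq": f"{boost_field}:{text}"})
    return f"/solr/{collection}/select?{urlencode(params)}"
```

Pasting the resulting path against a test cluster (or into a tool like Postman, as the posting mentions) makes it easy to isolate whether a relevancy complaint comes from the query, the boosts, or the indexed data.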
Posted 1 day ago
15.0 years
0 Lacs
India
On-site
Job Summary As part of the leadership team for the Data business, this role will be responsible for building and growing the Databricks capability within the organization. The role entails driving technical strategy, innovation, and solution delivery on the Databricks Unified Data Analytics platform. This leader will work closely with clients, delivery teams, technology partners, and internal stakeholders to define and deliver scalable, high-performance solutions using Databricks. 🔧 Key Responsibilities: Serve as the Subject Matter Expert (SME) in Databricks for internal teams and external clients. Lead the growth and maturity of the [technology] practice — define standards, processes, tools, and roadmaps. Collaborate with sales and solutioning teams to drive pre-sales activities — including client presentations, solution architecture, PoCs, and RFP responses. Act as a trusted advisor to clients, guiding them on architecture, best practices, and value realization. Build and mentor a team of engineers and consultants within the practice. Represent the company in thought leadership initiatives — webinars, blogs, conferences, etc. Work cross-functionally with delivery, engineering, and marketing teams to expand practice capabilities. Stay updated on product releases, industry trends, and customer use cases.
✅ Required Qualifications: 12–15 years of experience in Data Engineering, Analytics, or AI/ML, with 3–5 years of focused experience on Databricks. Proficiency in architectural best practices in the cloud around user management, data privacy, data security, performance and other non-functional requirements. Proven experience delivering large-scale data and AI/ML workloads on Databricks. Deep knowledge of Spark, Delta Lake, Python/Scala, SQL, and data pipeline orchestration. Experience with MLOps, Feature Store, Unity Catalog, and model lifecycle management. Certification preferred (e.g., Lakehouse Fundamentals, Data Engineer Associate/Professional). Experience integrating Databricks with cloud platforms (Azure, AWS, or GCP) and BI tools. Strong background in solution architecture, presales, and client advisory. Excellent communication, stakeholder engagement, and leadership skills. Exposure to data governance, security, compliance, and cost optimization in cloud analytics.
Posted 1 day ago
10.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS Hiring !!!
Role : Java Architect
Exp : 10-15 Years
Location : Hyderabad, Chennai, Kolkata, Pune, Bangalore
JD
Must-Have:
Core Java & Advanced Java (J2EE, Multi-threading, Collections, Concurrency)
Microservices Architecture (Spring Boot, RESTful APIs, gRPC)
Cloud Technologies (AWS, Azure, or GCP)
DevOps & CI/CD (Docker, Kubernetes, Jenkins, Git)
Database Management (PostgreSQL, MySQL, MongoDB, Redis)
Messaging Queues & Streaming (Kafka, RabbitMQ)
Security & Authentication (OAuth, JWT, SAML)
Testing & Debugging (JUnit, Postman, SonarQube)
Good-to-Have:
Big Data Technologies (Hadoop, Spark, ELK Stack)
Experience in Network Function Virtualization (NFV) & Software-Defined Networking (SDN)
Knowledge of Billing & CRM Systems (e.g., Amdocs, Ericsson, Netcracker)
Posted 1 day ago
15.0 - 20.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role : Application Developer Project Role Description : Design
Posted 1 day ago
3.0 - 8.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Project Role : Application Support Engineer Project Role Description : Act as software detectives
Posted 1 day ago
8.0 - 13.0 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role : Application Developer Project Role Description : Design
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Big Data Engineer - Scala
About Us
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry. The projects that will transform the financial services industry.
MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage.
Job Description
Preferred Skills:
Strong skills in messaging technologies like Apache Kafka or equivalent; programming skills in Scala and Spark (with optimization techniques) and Python.
Should be able to write queries through Jupyter Notebook.
Orchestration tools like NiFi, Airflow.
Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics.
Experience with SQL & distributed systems.
Strong understanding of cloud architecture.
Ensure a high-quality code base by writing and reviewing performant, well-tested code.
Demonstrated experience building complex products.
Knowledge of Splunk or other alerting and monitoring solutions.
Fluent in the use of Git and Jenkins.
A broad understanding of software engineering concepts and methodologies is required.
Posted 1 day ago
15.0 - 20.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Project Role : Data Platform Architect
Project Role Description : Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills : Microsoft Azure Data Services
Good to have skills : Microsoft Azure Databricks, Python (Programming Language), Microsoft SQL Server
Minimum 7.5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes various data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall architecture of the data platform, ensuring it meets the needs of the organization effectively.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Develop and maintain documentation related to data architecture and design.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Azure Data Services.
- Good To Have Skills: Experience with Microsoft Azure Databricks, Python (Programming Language), Microsoft SQL Server.
- Strong understanding of data modeling techniques and best practices.
- Experience with cloud-based data storage solutions and data processing frameworks.
- Familiarity with data governance and compliance standards.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Data Services.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Qualification : 15 years full time education
Posted 1 day ago
5.0 - 8.0 years
30 - 45 Lacs
Hyderabad
Work from Office
Job Description
We are seeking a Sr. Data Engineer with strong design and development skills who upholds scalability, availability and excellence when building the next generation of our data pipelines and platform. You are an expert in various data processing technologies and data stores, appreciate the value of clear communication and collaboration, and are devoted to continual capacity planning and performance fine-tuning for emerging business growth. As the Senior Data Engineer, you will be mentoring junior engineers in the team.
Good to have: Experience in Web Services, API integration, and data exchanges with third parties is preferred. Experience in Snowflake is a big plus. Experience in NoSQL technologies (MongoDB, FoundationDB, Redis) is a plus. We would appreciate candidates who can demonstrate business-side functional understanding and effectively communicate the business context alongside their technical expertise.
Job Requirements
Must have 5+ years of experience in the Data Engineering field, with a proven track record of exposure to Big Data technologies such as Hadoop, Amazon EMR, Hive, Spark.
Expertise in SQL technologies and at least one major Data Warehouse technology (Snowflake, Redshift, BigQuery, etc.).
Must have experience in designing and building data platforms and data models, integrating data from many sources, building ETL and data-flow pipelines, and supporting all parts of the data platform.
Programming proficiency in Python and Scala, with experience writing modular, reusable, and testable code, including robust error handling and logging in data engineering applications.
Hands-on experience with AWS cloud services, particularly in areas such as S3, Lambda, Glue, EC2, RDS, and IAM.
Experience with orchestration tools such as Apache Airflow for scheduling, monitoring, and managing data pipelines in a production environment.
Familiarity with CI/CD practices, automated deployment pipelines, and version control systems (e.g., Git, GitHub/GitLab), ensuring reliable and repeatable data engineering workflows.
Data analysis skills – can make arguments with data and proper visualization.
Energetic, enthusiastic, detail-oriented, and passionate about producing high-quality analytics deliverables.
Must have experience in developing applications with high performance and low latency.
Ability to take ownership of initiatives and drive them independently from conception to delivery, including post-deployment monitoring and support.
Strong communication and interpersonal skills with the ability to build relationships with stakeholders, understand business requirements, and translate them into technical solutions.
Comfortable working cross-functionally in a multi-team environment, collaborating with data analysts, product managers, and engineering teams to deliver end-to-end data solutions.
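The "robust error handling and logging" the requirements above call for often takes the form of a retry wrapper around each pipeline task. A minimal sketch; the zero-argument callable task interface is an assumption made for illustration, not a convention from the posting:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def with_retries(task, attempts=3, backoff=0.1):
    """Run a pipeline task, retrying transient failures with linear
    backoff and logging every failed attempt before re-raising."""
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # exhausted retries: surface the error to the scheduler
            time.sleep(backoff * attempt)
```

Orchestrators like Airflow provide the same behavior declaratively (task-level `retries` and `retry_delay`), but an explicit wrapper like this keeps the logic testable outside the scheduler.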
Posted 1 day ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Sr. Data Engineer
About Us
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry. The projects that will transform the financial services industry.
MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage.
Job Summary
Position: Sr Consultant
Location: Capco Locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram)
Band: M3/M4 (8 to 14 years)
Role Description
Job Title: Senior Consultant - Data Engineer
Responsibilities
Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability.
Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity.
Work within the client's best practice guidelines as set out by the Data Engineering Lead.
Work with data modellers and testers to ensure pipelines are implemented correctly.
Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.
Role Requirements
Strong Data Engineer with experience in Financial Services.
Knowledge of and experience building data pipelines in Azure Databricks.
Demonstrate a continual desire to implement “strategic” or “optimal” solutions and, where possible, avoid workarounds or short-term tactical solutions.
Work within an Agile team.
Experience/Skillset
8+ years' experience in data engineering.
Good skills in SQL, Python and PySpark.
Good knowledge of Azure Databricks (understanding of delta tables, Apache Spark, Unity Catalog).
Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions.
Good knowledge of the SDLC.
Familiar with Agile/Scrum ways of working.
Strong verbal and written communication skills.
Ability to manage multiple priorities and deliver to tight deadlines.
WHY JOIN CAPCO? You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer:
A work culture focused on innovation and creating lasting value for our clients and employees
Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
A diverse, inclusive, meritocratic culture
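The delta-table knowledge this role asks for centers on MERGE (upsert) semantics: update matching rows, insert the rest. As a conceptual sketch only, the same semantics simulated with plain dicts; in Databricks this would be `MERGE INTO` in SQL or the DeltaTable merge API, and the row schema here is invented for illustration:

```python
def merge_upsert(target, updates, key="id"):
    """Apply MERGE-style upsert semantics to a list of row dicts:
    rows in `updates` overwrite matching-key rows in `target`,
    and unmatched rows are inserted."""
    by_key = {row[key]: row for row in target}
    for row in updates:
        # merge onto any existing row so untouched columns survive
        by_key[row[key]] = {**by_key.get(row[key], {}), **row}
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]
updates = [{"id": 2, "val": "B"}, {"id": 3, "val": "c"}]
merged = merge_upsert(target, updates)
```

What Delta Lake adds on top of these semantics is the part that matters operationally: ACID transaction logs, concurrent-writer conflict detection, and time travel over previous table versions.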
Posted 1 day ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Join our Team

About this opportunity:
We are seeking a highly skilled, hands-on AI Architect - GenAI to lead the design and implementation of production-grade, cloud-native AI and NLP solutions that drive business value and enhance decision-making. The ideal candidate will have a strong background in machine learning, generative AI, and the architecture of scalable production systems. As an AI Architect, you will play a key role in shaping the direction of advanced AI technologies and leading teams in the development of cutting-edge solutions.

What you will do:
Architect and design AI and NLP solutions to address complex business challenges and support strategic decision-making.
Lead the design and development of scalable machine learning models and applications using Python, Spark, NoSQL databases, and other advanced technologies.
Spearhead the integration of generative AI techniques in production systems to deliver solutions such as chatbots, automated document generation, and workflow optimization.
Guide teams in comprehensive data analysis and exploration to extract actionable insights from large datasets, and communicate those findings effectively to stakeholders.
Collaborate with cross-functional teams, including software engineers and data engineers, to integrate AI models into production environments, ensuring scalability, reliability, and performance.
Stay at the forefront of advancements in AI, NLP, and generative AI, incorporating emerging methodologies into existing models and developing new algorithms to solve complex challenges.
Provide thought leadership on best practices for AI model architecture, deployment, and continuous optimization.
Ensure that AI solutions are built with scalability, reliability, and compliance in mind.

The skills you bring:
Proven experience in AI, machine learning, or a similar role, with a track record of delivering AI-driven solutions.
Hands-on experience designing and implementing end-to-end GenAI-based solutions, particularly chatbots, document generation, workflow automation, and other generative use cases.
Expertise in Python programming and extensive experience with AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and vector databases.
Deep understanding of and experience with distributed data processing using Spark.
Proven experience architecting, deploying, and optimizing machine learning models in production environments at scale.
Expertise in working with open-source generative AI models (e.g., GPT-4, Mistral, Code-Llama, StarCoder) and applying them to real-world use cases.
Expertise in designing cloud-native architectures and microservices for AI/ML applications.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Our careers site explains what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that collaborating with people with different experiences drives the innovation essential to our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 770049
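The vector-database retrieval this role touches on boils down to nearest-neighbour search over embeddings. A toy sketch, with invented 3-d vectors standing in for real model embeddings (which would have hundreds of dimensions):

```python
import math

# Toy nearest-neighbour retrieval, the core operation a vector database
# performs at scale. Document ids and 3-d "embeddings" are invented.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    # index maps document id -> embedding vector
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = {
    "doc_contracts": [0.9, 0.1, 0.0],
    "doc_invoices":  [0.8, 0.2, 0.1],
    "doc_hr_policy": [0.0, 0.1, 0.9],
}
print(top_k([1.0, 0.0, 0.0], index, k=2))
# ['doc_contracts', 'doc_invoices']
```

Production systems replace the exhaustive sort with approximate-nearest-neighbour indexes so lookups stay fast at millions of vectors.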
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment?

OUR IMPACT
We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We:
build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm,
have access to the latest technology and to massive amounts of structured and unstructured data,
leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications.

Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable, and success criterion, will be the deployment in 2025 of new complex data pipelines and surveillance models to detect inappropriate trading activity.

How You Will Fulfill Your Potential
As a member of our team, you will:
partner globally with sponsors, users and engineering colleagues across multiple divisions to create end-to-end solutions,
learn from experts,
leverage various technologies, including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes,
be able to innovate and incubate new ideas,
have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models,
be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

Qualifications
A successful candidate will possess the following attributes:
A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
Expertise in Java, as well as proficiency with databases and data manipulation.
Experience with end-to-end solutions, automated testing and SDLC concepts.
The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
Experience in some of the following is desired and can set you apart from other candidates:
developing in large-scale systems, such as MapReduce on Hadoop/HBase,
data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter,
API design, such as to create interconnected services,
knowledge of the financial industry and compliance or risk functions,
ability to influence stakeholders.

About Goldman Sachs
Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.
Posted 1 day ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We're on the hunt for a Senior Backend Engineer who's ready to make a significant impact. This isn't just any role; you'll be the architect behind the search experiences for millions of users across our entire platform. Imagine building the very systems that connect people with the content they love – that's the power you'll wield here.

What You'll Be Building & Leading

The Future of Search: You'll own the end-to-end design and implementation of our search platform. This means everything from crafting distributed indexing pipelines and robust storage infrastructure to building lightning-fast web services. You'll set the technical roadmap and make strategic decisions that shape how our search evolves. Get ready to build and scale systems that handle massive data volumes and intense query traffic.

Search That Delights: Your work will directly enhance how relevant our search results are. You'll achieve this through advanced signal engineering and optimizing our data models. We're looking for someone who can prototype and deploy innovative solutions for even the most complex internal and external search challenges. You'll also team up with our ML experts to seamlessly integrate cutting-edge ranking algorithms and recommendation systems.

Elevating Our Engineering: Beyond the code, you'll be a key leader. You'll guide code reviews, champion engineering best practices, and mentor your peers, elevating technical capabilities across the entire organization. Expect to partner closely with our Product, Design, and other cross-functional teams to bring truly user-focused solutions to life.

What Will Make You Successful Here

Core Technical Prowess: You have 6+ years of hands-on experience building scalable applications, ideally in Python, JavaScript, Java, or similar. You're no stranger to search technologies like Elasticsearch, Apache Solr, or OpenSearch. Orchestrating data pipelines with Apache Kafka, Kinesis, or AWS Lambda is second nature to you, and you're comfortable with distributed computing frameworks like Apache Spark, Apache Flink, or Ray.

Advanced Capabilities: A background in AI/ML, Natural Language Processing, or Ranking Systems is a huge plus. You possess strong computer science fundamentals, covering algorithms, networking, and operating systems. You have a proven track record of building customer-facing search solutions that perform at scale, and you live by SOLID principles, writing code that's both maintainable and testable.

Leadership & Communication: You're an excellent communicator, capable of clearly explaining complex technical concepts to anyone, regardless of their technical background. You have a bias for action, coupled with a strong product mindset and genuine empathy for users. You're adept at driving operational excellence and proactively managing technical risks.

Write to shruthi.s@careerxperts.com to get connected.
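The lexical-ranking fundamentals behind engines like Elasticsearch and OpenSearch can be sketched with a bare-bones BM25 scorer. The corpus and query are invented; k1 and b are the commonly used defaults:

```python
import math
from collections import Counter

# Bare-bones BM25 scoring over a toy corpus: the classic lexical-ranking
# baseline that engines like Elasticsearch/OpenSearch build on.

def bm25_scores(query, docs, k1=1.2, b=0.75):
    tokenised = [d.lower().split() for d in docs]
    n = len(tokenised)
    avgdl = sum(len(d) for d in tokenised) / n
    scores = []
    for doc in tokenised:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in tokenised if term in d)
            if df == 0:
                continue  # term appears nowhere: contributes nothing
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            denom = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = ["spark streaming pipeline", "search ranking with spark", "ui design notes"]
scores = bm25_scores("spark ranking", docs)
print(max(range(len(docs)), key=scores.__getitem__))  # → 1 (second doc wins)
```

Real engines add tokenisation, inverted indexes and learned re-ranking on top, but the term-frequency / document-frequency trade-off is the same.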
Posted 1 day ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Python Developer
Experience: 6–8 Years
Location: Hyderabad, Telangana
Work Mode: Onsite
Work Type: Contract

About the Role:
We are seeking a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a strong background in Python, SQL, dbt, and PySpark, with hands-on experience designing and developing scalable data pipelines and transformation workflows.

Key Responsibilities:
Design, develop, and maintain data pipelines using Python, SQL, dbt, and PySpark.
Build and optimize data models and transformation workflows in dbt.
Ensure data quality, integrity, and consistency across systems.
Implement best practices in coding, testing, and deployment of data engineering solutions.
Monitor and troubleshoot data pipeline performance and reliability.

Required Skills:
Strong proficiency in Python for data engineering and scripting.
Advanced knowledge of SQL for data manipulation and querying.
Hands-on experience with dbt for data transformation and modeling.
Expertise in PySpark for distributed data processing.

Preferred Skills:
Experience with Scala for Spark-based applications.

Qualifications:
6–8 years of professional experience in software development.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration abilities.
Posted 1 day ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role
Data Engineer - 1 (Experience 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by taking a technology-first approach in everything we do, with the aim of enhancing customer experience through superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX, which stands for Kotak's Data Exchange, is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technology fellows a great opportunity to build things from scratch and create a best-in-class data lakehouse solution.

The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; to be an early member in Kotak's digital transformation journey; to learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch and analytics) in a programmatic way; and to look ahead to systems that can be operated by machines using AI technologies.

The data platform org is divided into three key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank: building a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), managing the central data warehouse for extremely high-concurrency use cases, building connectors for different sources, building the customer feature repository, building cost-optimization solutions such as EMR optimizers, performing automations, and building observability capabilities for Kotak's data platform. The team will also be the centre of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data-consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers and all analytics use cases.

Data Governance
This will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure built from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, and data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience in data engineering.
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR.
Experience with data pipeline tools such as Airflow and Spark.
Experience with data modeling and data quality best practices.
Excellent problem-solving and analytical skills.
Strong communication and teamwork skills.
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala.
Strong advanced SQL skills.

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow.
Prior experience in the Indian banking segment and/or fintech is desired.
Experience with non-relational databases and data stores.
Building and operating highly available, distributed data processing systems for large datasets.
Professional software engineering and best practices for the full software development life cycle.
Designing, developing, and implementing different types of data warehousing layers.
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions.
Building scalable data infrastructure and understanding distributed systems concepts.
SQL, ETL, and data modelling.
Ensuring the accuracy and availability of data to customers.
Proficiency in at least one scripting or programming language for handling large-volume data processing.
Strong presentation and communication skills.
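The real-time / micro-batch / batch distinction mentioned above can be illustrated with a toy micro-batcher that groups a stream of timestamped events into fixed windows, the pattern behind micro-batch engines such as Spark Structured Streaming. The events and window size here are invented:

```python
from datetime import datetime, timedelta

# Toy micro-batching: bucket timestamped events into fixed 5-minute
# windows. Real engines do this continuously over unbounded streams;
# the event data below is invented for illustration.

def micro_batches(events, window_minutes=5):
    batches = {}
    for ts, payload in events:
        # Floor each timestamp to the start of its window.
        window = ts - timedelta(minutes=ts.minute % window_minutes,
                                seconds=ts.second)
        batches.setdefault(window, []).append(payload)
    return batches

events = [
    (datetime(2024, 1, 1, 10, 1, 30), "txn-a"),
    (datetime(2024, 1, 1, 10, 3, 0), "txn-b"),
    (datetime(2024, 1, 1, 10, 7, 45), "txn-c"),
]
batches = micro_batches(events)
for window in sorted(batches):
    print(window.strftime("%H:%M"), batches[window])
# 10:00 ['txn-a', 'txn-b']
# 10:05 ['txn-c']
```

Batch processing is the same idea with one very large window; "real time" shrinks the window toward per-event handling.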
Posted 1 day ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Being the go-to MEP/CIVIL person
Are you considered the go-to person for all MEP/CIVIL matters? That's what you'll be in this role. You'll manage all activities related to mechanical, electrical and plumbing in terms of planning, designing, procurement, construction, testing and commissioning, and final handover. Your task is to fully understand, collect and deliver clients' MEP/CIVIL requirements. The design manager in MEP/CIVIL will depend on you to help with schedule and plan establishment, value engineering, and design change management. You'll also assist the contract manager in MEP/CIVIL-related procurement and VO management. On top of that, you'll support the construction manager in MEP/CIVIL-related installation, site inspection and contractor management.

Making visions come true
You'll develop big ideas that will spark the effective management and successful execution of all phases of a project: initiating, designing, planning, controlling, executing, monitoring, and closing. You'll need to carefully identify and take note of our clients' needs and figure out what exactly needs to be done. This involves defining the scope of the work and the expected outcome, while also detailing all the necessary objectives to get there. While you do all of this, you'll need to keep tabs on company resources used in the projects, and allocate these resources to complete the project within budget. You'll also need to help clients organize and analyze all tenders and procurement for all contractors and suppliers, and represent them from the beginning to the end of a project.

Building strong teams and business reputation
One of your priorities will be to produce high-performing teams that drive successful project execution. You'll also represent and promote the company throughout projects and in pursuit of more project opportunities.

Keeping risks at bay
How do you deal with risks? You'll need to identify any potential risks in the MEP/CIVIL field and report them to the Project Manager. It will be critical to design risk management and solution provisions, particularly to identify health & safety issues.

Sound like you? To apply you need to be:

An MEP/CIVIL pro
You have a degree in an MEP/CIVIL engineering-related discipline or related field, and five years of combined educational and work experience. You also need sufficient experience in construction site management, as well as a strong understanding of all aspects of development management, including financial appraisal, risk management and negotiation. Do you have a strong background in all aspects of MEP/CIVIL-related management, including the development of MEP/CIVIL project plans, procedures and construction schedules? Are you familiar with HVAC, electrical engineering, and BMS? Do you have knowledge of security, AV, and IT systems? If your answers are yes, let's talk.

A business-savvy leader who can walk the talk
You understand the business well, particularly in terms of the systems and tools to use, best practices and safety requirements. You're also knowledgeable about key industries and the local market, the real estate and construction business above all. You also have a basic understanding of the key drivers that push projects forward, while also considering the client's business requirements. You'll back up your business know-how with the necessary communication skills, as you need to regularly deliver business development presentations to potential clients in both English and Chinese. You'll also manage site activities, negotiate with contractors, review the legal aspects of contracts, contribute to market analysis, and manage change orders.

A flexible leader with superb interpersonal skills
Are you a people person with superb interpersonal skills? You'll need to create a proactive working environment that not only motivates your employees but also encourages them to maintain good relationships with clients, communicate effectively with each other, and contribute enthusiastically to the project. You also need to be a results-oriented leader with good problem-solving skills, someone who can nurture positive relationships with all stakeholders involved, including your team members and clients.

What we can do for you:
At JLL, we make sure that you become the best version of yourself by helping you realise your full potential in an entrepreneurial and inclusive work environment. We will empower your ambitions through our dedicated Total Rewards Program, competitive pay and benefits package. Apply today!
Posted 1 day ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Role Purpose
The purpose of this role is to design, test and maintain software programs for operating systems or applications that need to be deployed at the client end, and to ensure they meet 100% of quality assurance parameters.

Responsibilities
1. Understand the requirements and design of the product/software:
Develop software solutions by studying information needs, systems flow, data usage and work processes.
Investigate problem areas and follow the software development life cycle.
Facilitate root-cause analysis of system issues and the problem statement.
Identify ideas to improve system performance and availability.
Analyze client requirements and convert them into feasible designs.
Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements.
Confer with project managers to obtain information on software capabilities.

2. Perform coding and ensure optimal software/module development:
Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software.
Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
Analyze information to recommend and plan the installation of new systems or modifications to existing systems.
Ensure that code is error-free, with no bugs or test failures.
Prepare reports on programming project specifications, activities and status.
Ensure all defects are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns.
Compile timely, comprehensive and accurate documentation and reports as requested.
Coordinate with the team on daily project status and progress, and document it.
Provide feedback on usability and serviceability, trace results to quality risk, and report to the concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
Capture all requirements and clarifications from the client for better-quality work.
Take feedback on a regular basis to ensure smooth and on-time delivery.
Participate in continuing education and training to stay current on best practices, learn new programming languages, and better assist other team members.
Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
Document and demonstrate solutions through documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.
Document the necessary details and reports formally, for a proper understanding of the software from client proposal to implementation.
Ensure good-quality interaction with the customer with respect to e-mail content, fault-report tracking, voice calls, business etiquette, etc.
Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Mandatory Skills: Apache Spark.
Experience: 3-5 years.