3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are seeking an experienced Big Data Engineer to join our team in Bangalore immediately. If you have a strong background in big data technologies, data processing frameworks, and cloud platforms, we would like to hear from you. Your primary responsibilities will include designing, developing, and maintaining big data solutions using Hadoop, Hive, and Spark. You will build data processing pipelines with Spark (Scala/Java) and implement real-time streaming solutions using Kafka. Working with PostgreSQL for data management and integration will also be part of your daily routine. Collaboration with various teams to deliver scalable and robust data solutions is a key aspect of this role. You will use Git for version control and Jenkins for CI/CD pipeline automation. Experience deploying and managing workloads on AWS and/or Azure is highly desirable, and proficiency with Databricks for advanced Spark analytics (Databricks Spark certification preferred) will be an advantage. You will also troubleshoot data and performance issues and optimize processes for efficiency. If you are ready to contribute your expertise to a dynamic team and take on these challenges, we encourage you to apply.
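For illustration only, here is a minimal sketch of the kind of Spark-plus-Kafka pipeline this role describes: a Structured Streaming job in Scala that reads events from a Kafka topic and computes windowed aggregates. The broker address, topic name, and event schema are placeholders, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object OrderStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-order-stream").getOrCreate()
    import spark.implicits._

    // Hypothetical event schema, for illustration only
    val schema = new StructType()
      .add("orderId", StringType)
      .add("amount", DoubleType)
      .add("ts", TimestampType)

    // Read the Kafka topic as an unbounded streaming DataFrame (broker/topic are placeholders)
    val orders = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()
      .select(from_json($"value".cast("string"), schema).as("o"))
      .select("o.*")

    // Five-minute revenue windows; the watermark bounds state kept for late events
    val totals = orders
      .withWatermark("ts", "10 minutes")
      .groupBy(window($"ts", "5 minutes"))
      .agg(sum($"amount").as("total"))

    totals.writeStream
      .outputMode("update")
      .format("console") // a real job would write to PostgreSQL or another sink
      .start()
      .awaitTermination()
  }
}
```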
Posted 7 hours ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Tech Lead at Carelon Global Solutions India, your primary responsibility will be collaborating with data architects to implement data models and ensure seamless integration with AWS services. You will develop solutions for moving data processing from on-premise systems to the cloud for large enterprises using AWS services and Snowflake. You will also design and develop data ingestion pipelines that extract data from various sources, then transform, aggregate, and enrich it, and you will implement data governance policies and procedures to ensure data quality, security, and compliance.

Your expertise will be required in technologies such as Snowflake, Python, AWS S3/Athena, RDS, CloudWatch, Lambda, Glue, and EMR clusters. You will need a solid understanding of nested JSON handling and flattening, and of file formats such as Avro and Parquet. You will analyze day-to-day loads and issues, work closely with admin and architect teams when issues arise, and document and simulate complex issues. You will also oversee the technical aspects of projects, make design and architecture decisions, and ensure best practices are followed. Effective communication with peers, breaking down complex topics, and guiding cross-functional teams will be crucial, as will a deep understanding of the project's overall architecture and critical systems. Delegating work to yourself and team members, organizing the team around a project timeline, and collaborating with business functions to identify and clear roadblocks will be part of your routine.

To qualify, you should have a Bachelor's degree in Information Technology, Data Engineering, or a similar field, a minimum of 3-9 years of experience with AWS services, 8+ years of experience in Snowflake with AWS, data engineering, and cloud services, and 8-11 years of overall IT experience. Experience with agile development processes is preferred.

Being part of Carelon Global Solutions India means an environment that fosters growth, well-being, purpose, and a sense of belonging: extensive learning and development, an innovative culture, holistic well-being, comprehensive rewards and recognition, competitive health and medical insurance coverage, best-in-class amenities and workspaces, and policies designed with associates at the center. At Carelon, we celebrate the diversity of our workforce and the diverse ways we work, and we are committed to providing reasonable accommodations to empower our associates to deliver the best results for our customers. If you have a disability and need accommodation during the interview process, please ask for the Reasonable Accommodation Request Form. Carelon Global Solutions India is an equal opportunity employer offering a full-time position with a promise of limitless opportunities and a culture that values inclusivity and diversity.
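As an aside, nested JSON flattening of the sort mentioned above typically looks like the following Spark/Scala sketch; the paths and field names are hypothetical placeholders, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FlattenJson {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("flatten-json").getOrCreate()

    // Paths and field names are hypothetical placeholders
    val raw = spark.read.option("multiLine", "true").json("s3://bucket/raw/customers/")

    // explode() unnests an array-of-structs column; dotted paths lift struct fields
    val flat = raw
      .withColumn("order", explode(col("orders")))
      .select(
        col("customer.id").as("customer_id"),
        col("customer.name").as("customer_name"),
        col("order.id").as("order_id"),
        col("order.total").as("order_total"),
        to_date(col("order.ts")).as("order_date")
      )

    // Write columnar Parquet, partitioned by date for efficient pruning
    flat.write.mode("overwrite").partitionBy("order_date").parquet("s3://bucket/curated/orders/")
  }
}
```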
Posted 8 hours ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
The Data Engineer role at Atgeir Solutions in Pune requires a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with 5-10 years of experience in IT consulting and services. As a Data Engineer, you will design, develop, and optimize data infrastructure and workflows to drive insightful analytics, enhance AI/ML models, and facilitate data-driven decision-making for clients.

Your primary responsibilities will include designing and implementing scalable data architectures, ETL pipelines, and data workflows to process, store, and analyze large datasets effectively. You will integrate and consolidate data from various sources, ensuring data integrity and quality, while implementing data governance practices and complying with information security standards. Collaborating closely with data scientists, software engineers, and business analysts, you will align data engineering solutions with business goals and requirements. Optimizing database performance, query execution, and storage solutions for efficiency is also a crucial part of the role, as is contributing to innovative projects and mentoring junior team members to foster a culture of continuous learning and development.

The ideal candidate should be proficient in programming languages such as Python, Java, or Scala, with hands-on experience in big data technologies like Hadoop and Spark. Strong knowledge of SQL and NoSQL databases, familiarity with data warehousing tools, and experience with cloud platforms such as GCP, AWS, or Azure are essential, as are expertise in ETL/ELT processes, data modeling, and data lake architectures, and knowledge of data visualization tools. Academic experience teaching relevant technical subjects and the ability to explain complex concepts to diverse audiences are beneficial. Soft skills, including excellent analytical and problem-solving abilities, strong interpersonal skills, and the capacity to collaborate effectively in cross-functional teams, are highly valued. Preferred qualifications include certifications in cloud technologies, familiarity with AI/ML tools and frameworks, and contributions to research papers, technical blogs, or open-source projects.
Posted 8 hours ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Software Development Engineer (Data Engineering) in Enterprise Data Solution (EDS) at Mastercard, you will help build high-performance data pipelines for loading data into the company's Data Warehouse. The Data Warehouse provides analytical capabilities to business users across the company, helping them address business challenges through data-driven insights. You will be an integral part of a growing organization, collaborating with experienced engineers to tackle complex problems.

Your responsibilities will include participating in medium-to-large data engineering projects, integrating new sources of real-time, streaming, batch, and API-based data into the platform, and supporting business stakeholders in leveraging data-driven insights for growth and transformation. You will build and maintain data processing workflows, ensure reliable integrations with internal systems and third-party APIs, and assist data analysts in deriving meaningful insights from complex datasets. Collaborating with cross-functional agile teams, you will drive projects through the full development cycle while promoting data engineering best practices within the team.

To excel in this role, you should hold at least a Bachelor's degree in Computer Science, Computer Engineering, or a related field, or possess equivalent work experience. You must have prior experience in Data Warehouse projects within a product or service-based organization, along with expertise in data engineering and implementing end-to-end Data Warehouse projects in a Big Data environment. Proficiency with databases such as Oracle and Netezza, strong SQL knowledge, and experience building data pipelines using Spark with Scala, Python, or Java on Hadoop are essential; familiarity with NiFi and Agile methodologies is advantageous. Strong analytical skills are necessary for debugging production issues, providing root cause analysis, and implementing mitigation plans. Effective communication, relationship-building, collaboration, and organizational skills are essential, and you should be detail-oriented, proactive, and able to work independently under pressure with a high level of initiative and self-motivation. The ability to quickly learn and adopt new technologies, conduct proofs of concept (POCs), and work effectively in diverse, geographically distributed project teams is key to success.

In addition to your technical responsibilities, as a Mastercard employee you are expected to uphold corporate security responsibilities: complying with security policies, maintaining the confidentiality and integrity of accessed information, reporting any security violations or breaches, and completing mandatory security trainings per Mastercard's guidelines.
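For context, one common shape of the relational-database-to-Hadoop loading work described here is a partitioned JDBC read in Spark/Scala; the connection details, table, and bounds below are placeholder assumptions, not Mastercard specifics.

```scala
import org.apache.spark.sql.SparkSession

object WarehouseLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("warehouse-load")
      .enableHiveSupport()
      .getOrCreate()

    // JDBC URL, schema/table, and bounds are placeholders; credentials come from the environment
    val src = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "SALES.TRANSACTIONS")
      .option("user", sys.env("DB_USER"))
      .option("password", sys.env("DB_PASS"))
      .option("fetchsize", "10000")        // larger fetch batches for throughput
      .option("partitionColumn", "TXN_ID") // split the read into parallel range scans
      .option("numPartitions", "8")
      .option("lowerBound", "0")
      .option("upperBound", "100000000")
      .load()

    // Land the batch as a Parquet-backed table in the Hadoop warehouse
    src.write.mode("overwrite").format("parquet").saveAsTable("dw.transactions")
  }
}
```

Splitting on a numeric key keeps each executor reading a bounded range instead of funneling the whole table through one connection.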
Posted 8 hours ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As an Associate Manager, Data Science within the Digital and Technology team, you will play a crucial role in modeling complex problems, uncovering insights, and streamlining workflows using statistical analysis, machine learning, generative AI, and data visualization techniques with cutting-edge big data and AI technologies. Your primary focus will be collaborating with business partners to develop data science projects that drive advanced analytics and AI adoption to gain a competitive edge, enhance operational efficiency, and foster innovation.

Your responsibilities will include executing end-to-end advanced data science projects aligned with business goals; mastering on-premise and cloud-based data warehousing and data lakes; staying abreast of the latest AI and ML technologies; constructing data pipelines for ingestion; cleaning data and imputing missing values; conducting in-depth exploratory data analysis; engineering features relevant to data science problems; creating and assessing data models using various techniques; and presenting modeling results and recommendations to stakeholders in a clear and actionable manner. You will also work closely with business stakeholders to identify challenges and opportunities, develop data-driven strategies, and implement solutions leveraging AI and machine learning technologies. Adhering to established standards, enhancing documentation, sharing knowledge within teams, ensuring data accuracy, consistency, and security, and collaborating effectively with team members are all essential.

To qualify, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or Applied Mathematics, along with at least 4 years of experience in data science. You should have a strong background in machine learning, statistical analysis, and predictive modeling, with practical expertise in programming languages like Python, R, or Scala. Proficiency in big data technologies and cloud-based solutions, and experience developing scalable machine learning models for production environments, are crucial. Familiarity with data engineering tools, cloud services platforms, containerization, and CI/CD tools, as well as excellent communication skills, are also required.

As an integral part of a fast-paced and collaborative environment, you should demonstrate strong analytical and problem-solving skills and a growth mindset that drives innovation. Your leadership behaviors should emphasize building outstanding teams, setting clear direction, simplification, collaboration, execution, accountability, fostering growth, embracing inclusivity, and maintaining an external focus.
Posted 8 hours ago
1.0 - 5.0 years
0 Lacs
Haryana
On-site
As a Senior Quality Assurance Engineer at Ticketmaster Sport International Engineering in New Delhi, you will join a globally recognized leader in sports ticketing. The Ticketmaster Sport International Engineering division is dedicated to creating and maintaining industry-standard ticketing software solutions relied upon by high-profile sports clubs and organizations. Your role will involve ensuring the reliability, quality, and performance of our software solutions.

You will collaborate with a Microsoft .NET development team to enhance the end-user purchase experience through basket management, ticket delivery, and fulfillment. You will design and implement quality assurance practices, execute test cases, produce test reports, and improve automated test suites for robust, high-quality software solutions.

Key responsibilities:
- Quality control and sign-off of software releases
- Designing modular testing solutions
- Setting up and maintaining testing frameworks
- Developing quality assurance practices and test plans
- Executing test cases and preparing test plans
- Producing test and quality reports
- Creating automated test suites
- Collaborating with a team to deliver software solutions
- Reviewing defects and updating them for accuracy
- Operating effectively within a global organization
- Defining and advocating for QA standards and best practices

Required technical skills:
- 3+ years of experience in the IT industry
- Experience in Agile methodology and working in scrum teams
- Automation testing using Selenium
- Performance testing using Gatling (see the sketch after this posting)
- Knowledge of C#/Java/Scala and OOP concepts
- Familiarity with CI tools like GitLab CI and Jenkins
- Experience with web services testing and relational databases
- Understanding of web protocols and standards
- Hands-on experience with Git version control

Desired technical skills:
- Experience in automation test framework setup
- Testing in cloud environments like AWS
- Mentoring QA staff on quality objectives
- Working with containerization technologies like Docker
- Experience in microservice development
- Familiarity with GitLab CI pipelines and Octopus Deploy
- Knowledge of TestRail and code analysis tools like SonarQube

Behavioral skills:
- Excellent communication and interpersonal skills
- Problem-solving abilities
- Enthusiasm for technology and a desire to grow as a QA software engineer
- Curiosity about new technologies
- Ability to respond positively to challenges
- Desire to take on responsibility and contribute to team success

At Ticketmaster, you will be part of a diverse and inspiring culture driven by teamwork, integrity, and belonging. If you are passionate about live entertainment and technology and want to contribute to a global vision of connecting people to live events, we welcome your application.
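To give a flavor of the Gatling work listed above, here is a minimal load-test simulation in Scala; the host, endpoint, payload, and thresholds are invented for illustration and are not Ticketmaster's actual APIs.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// A minimal Gatling simulation sketch; URL and endpoint are placeholders
class BasketSimulation extends Simulation {

  val httpProtocol = http
    .baseUrl("https://example.test") // placeholder host
    .acceptHeader("application/json")

  val scn = scenario("Add ticket to basket")
    .exec(
      http("add_to_basket")
        .post("/api/basket/items") // hypothetical endpoint
        .body(StringBody("""{"eventId": 42, "seats": 2}""")).asJson
        .check(status.is(200))
    )

  // Ramp 50 virtual users over 30 seconds and assert latency/error budgets
  setUp(scn.inject(rampUsers(50).during(30.seconds)))
    .protocols(httpProtocol)
    .assertions(
      global.responseTime.percentile3.lt(800), // p95 under 800 ms
      global.failedRequests.percent.lt(1.0)
    )
}
```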
Posted 9 hours ago
15.0 - 19.0 years
0 Lacs
Karnataka
On-site
Salesforce is seeking software developers who are eager to make a significant, measurable, positive impact through their code for users, the company's success, and the industry at large. You will join a team of top-notch engineers to develop innovative features that customers will cherish, adopt, and use, while ensuring the stability and scalability of our trusted CRM platform. The software engineer role at Salesforce spans architecture, design, implementation, and testing to guarantee the delivery of high-quality products, with opportunities to participate in code reviews, mentor junior engineers, and offer technical guidance to the team based on your seniority level.

At Salesforce, we prioritize writing high-quality, maintainable code that enhances product stability and simplifies our work processes. We embrace a hybrid work model that values the unique strengths of each team member and encourages personal growth, and we believe that autonomous teams empowered to make decisions foster individual and collective success for the product, the company, and our customers.

As a Backend Principal Engineer, your responsibilities will include:
- Developing new and innovative components in a rapidly evolving market to enhance scalability and efficiency
- Creating high-quality, production-ready code to serve millions of users across our applications
- Making design choices based on performance, scalability, and future growth
- Contributing to all stages of the software development life cycle (SDLC) in a Hybrid Engineering model, including design, implementation, code reviews, automation, and testing
- Building efficient components and algorithms within a microservice, multi-tenant SaaS cloud environment
- Conducting code reviews, mentoring junior engineers, and providing technical guidance to the team as befits your seniority level

Required skills:
- Proficiency in multiple programming languages and platforms
- Over 15 years of software development experience
- In-depth knowledge and experience working with distributed systems
- Strong understanding of service-oriented architecture
- Deep expertise in object-oriented programming and languages such as C++, C#/.NET, Java, Python, Scala, Go, and Node.js
- Excellent grasp of RDBMS concepts and experience developing applications on SQL Server, MySQL, and PostgreSQL
- Experience developing SaaS applications on public cloud infrastructure such as AWS, Azure, or GCP
- Proficiency with queues, locks, scheduling, event-driven architecture, workload distribution, and relational and non-relational databases
- Thorough understanding of software development best practices and leadership capabilities
- Degree or equivalent relevant experience required, with evaluation based on core competencies for the role

Preferred skills:
- Familiarity with NoSQL databases like Cassandra and HBase, and document stores such as Elasticsearch
- Experience with open-source projects like Kafka, Spark, or ZooKeeper
- Knowledge of or contributions to open-source technologies
- Experience in native Windows or Linux development
- Development of RESTful services
- Understanding of security concepts such as mTLS, PKI, and OAuth/SAML
- Experience with distributed caching and load-balancing systems

BENEFITS & PERKS
Salesforce offers a comprehensive benefits package, including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more.
Employees also have access to world-class enablement and on-demand training through Trailhead.com. Additionally, they can benefit from exposure to executive thought leaders, regular 1:1 coaching with leadership, volunteer opportunities, and participation in the 1:1:1 model for community giving back. For more details, please visit https://www.salesforcebenefits.com/
Posted 10 hours ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As an experienced Data Architect with over 8 years of relevant experience, you will design and implement scalable, resilient data architectures for both batch and streaming data processing. Your role will involve developing data models and database structures to ensure efficient data storage and retrieval. Ensuring data security, integrity, and compliance with relevant regulations will be a key focus area, and you will integrate data from various sources into the big data infrastructure while monitoring and optimizing its performance.

Collaboration will be a crucial aspect of the job: you will work closely with data scientists, engineers, and business stakeholders to understand requirements and translate them into technical solutions. Your expertise in evaluating and recommending new technologies and tools for data management and processing will be invaluable. You will also provide guidance and mentorship to junior team members, identify and resolve complex data challenges, and actively participate in the pre- and post-sales process.

The ideal candidate will have a Bachelor's or Master's degree in computer science, computer engineering, or a relevant field, with 10+ years of total experience, including at least 2 years as a Big Data Architect. You should have a strong understanding of big data technologies such as Hadoop, Spark, and NoSQL databases, and of cloud-based data services like AWS, Azure, and GCP. Proficiency in programming languages such as Python, Java, and Scala, along with Spark, is essential, as is experience with data modeling, database design, ETL processes, and data security principles.

In addition to technical skills, strong analytical and problem-solving abilities, good communication and collaboration skills, knowledge of API design and development, and an understanding of data visualization techniques are required. Familiarity with authentication mechanisms such as LDAP, Active Directory, SAML, and Kerberos, and with authorization configuration for Hadoop-based distributed systems, is a plus, as is experience with DevOps methodology, toolsets, and automation. If you are passionate about data architecture and possess the required skills and qualifications, this role offers an exciting opportunity to contribute to the design and implementation of cutting-edge data solutions.
Posted 10 hours ago
10.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Technology Senior Lead Analyst position is a senior-level role that involves establishing and implementing new or revised application systems and programs in coordination with the Technology team. Your main objective is to lead applications systems analysis and programming activities. You will lead the Data Science functions regionally to meet goals, deploy new products, and enhance processes, and serve as a functional Subject Matter Expert (SME) across the company, using advanced knowledge of algorithms, data structures, distributed systems, and networking to lead, architect, and drive broader adoption forward.

To be successful, you should have at least 10+ years of relevant experience in an applications development role or senior-level experience in the data analytics/ML space, as well as 3+ years of experience applying AI to practical uses, including deep learning, NLP, and TensorFlow. Proficiency in Scala, Python, and other language- or domain-specific packages, as well as the Big Data ecosystem, is required. You should exhibit expertise in all aspects of technology by understanding broader patterns and techniques as they apply to Citigroup's internal and external cloud platforms (AWS, PCF, Akamai). Acquiring relevant technology and financial industry skills (AWS PWS) and understanding all aspects of NGA technology, including innovative approaches and new opportunities, is essential. Strong communication skills are a must, including the ability to translate business use cases into technical specifications, work with diverse project teams, and develop relationships with vendors.

You will analyze complex business processes, system processes, and industry standards to define and develop solutions to high-level problems. Your role will also involve allocating work and acting as an advisor and coach to developers, analysts, and new team members, providing expertise in advanced applications programming and planning assignments involving large budgets, cross-functional projects, or multiple projects. You will assess risk appropriately when making business decisions, safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations, and you will utilize advanced knowledge of supported main system flows and comprehensive knowledge of multiple areas to achieve technology goals.

Qualifications include 14+ years of relevant experience, hands-on experience with Big Data, ML, and generative AI, and experience executing projects from start to end. You should be a demonstrated Subject Matter Expert (SME) in applications development, with demonstrated knowledge of clients' core business functions, leadership, project management, and development skills, as well as relationship- and consensus-building skills. A Bachelor's degree or equivalent experience is required; a Master's degree is preferred.

Citi is an equal opportunity and affirmative action employer, offering full-time opportunities in the Technology Job Family Group, specifically in the Applications Development Job Family.
Posted 11 hours ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate-level role that involves participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

You will use your knowledge of applications development procedures and concepts, along with basic knowledge of other technical areas, to identify and define necessary system enhancements. This includes using scripting tools, analyzing and interpreting code, consulting with users, clients, and other technology groups on issues, recommending programming solutions, and installing and supporting customer exposure systems. You will also apply fundamental knowledge of programming languages to design specifications, analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts.

Your role will involve identifying problems, analyzing information, and making evaluative judgments to recommend and implement solutions, resolving issues by selecting solutions based on acquired technical experience and guided by precedent. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and other team members. You will also assess risk appropriately when making business decisions, with particular consideration for the firm's reputation and for safeguarding Citigroup, its clients, and assets: driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 4-6 years of proven experience in developing and managing big data solutions using Apache Spark; Scala is a must
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Experience working on Kafka and JMS/MQ applications, and with multiple operating systems (Unix, Linux, Windows)
- Familiarity with data warehousing concepts, ETL processes, data modeling, data architecture, and data integration techniques
- Knowledge of best practices for data security, privacy, and compliance
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience developing frameworks and utility services, including logging/monitoring
- Experience delivering high-quality software following continuous delivery, using code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.)
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Profound knowledge of implementing different data storage solutions, such as RDBMS (Oracle), Hive, HBase, Impala, and NoSQL databases

Education:
- Bachelor's degree/University degree or equivalent experience

Please note that this job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required.
Posted 11 hours ago
8.0 - 20.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Senior Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will also monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users.

Your role will require in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. You will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Additionally, you will consult with users, clients, and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. You will also ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and other team members.

You will be expected to assess risk appropriately when making business decisions, with particular consideration for the firm's reputation and for safeguarding Citigroup, its clients, and assets: driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 8 to 20 years of relevant experience
- Primary skills in Java/Scala plus Spark
- Must have experience in Hadoop, Java, Spark, Scala, and Python
- Experience in systems analysis and programming of software applications
- Experience managing and implementing successful projects
- Working knowledge of consulting/project management techniques and methods
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education: Bachelor's degree/University degree or equivalent experience

Please note that this job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required.
Posted 11 hours ago
2.0 - 31.0 years
17 Lacs
Bengaluru/Bangalore
On-site
Job Title: Data Architect

What are my responsibilities? As a Data Architect, you will:
- Design and develop technical solutions to integrate disparate information and create meaningful insights for business using big-data architectures
- Build and analyze large structured and unstructured datasets on scalable cloud infrastructures
- Develop prototypes and proofs of concept using multiple data sources and big-data technologies
- Process, manage, extract, and cleanse data to apply data analytics effectively
- Design and develop scalable end-to-end data pipelines for batch and stream processing
- Stay current with the data analytics landscape, exploring new technologies, techniques, tools, and methods
- Inspire enthusiasm for using modern data technologies to solve problems and deliver business value

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Engineering/Analytics is desirable.

Experience Level: Minimum 8 years in software development, with at least 2-3 years of hands-on experience in Big Data/Data Engineering.

Desired Knowledge & Experience:

Data Engineer - Big Data Developer:
- Spark: Spark 3.x, RDD/DataFrames/SQL, batch/Structured Streaming; knowledge of internals such as Catalyst, Tungsten, and Photon
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE & tools: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Testing: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing
- Big data design: Lakehouse/Medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction (see the sketch after this posting)
- Languages: Python/functional programming (FP)
- SQL: T-SQL, Spark SQL, HiveQL
- Storage: data lake and big data storage design

Additional helpful skills:
- Data pipelines: ADF, Synapse Pipelines, Oozie, Airflow
- Languages: Scala, Java
- NoSQL: Cosmos DB, MongoDB, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop stack: HDInsight, MapReduce, HDFS, YARN, Oozie, Hive, HBase, Ambari, Ranger, Atlas, Kafka
- Data catalogs: Azure Purview, Apache Atlas, Informatica

Big Data Architect:
- Expertise: mastery of the technologies, languages, and methodologies listed under Data Engineer - Big Data Developer
- Mentorship: mentor and educate developers on relevant technologies and methodologies
- Architecture styles: Lakehouse, Lambda, Kappa, Delta, Data Lake, Data Mesh, Data Fabric, data warehouses (e.g., Data Vault)
- Application architecture: microservices, NoSQL, Kubernetes, cloud-native solutions
- Experience: proven track record across multiple technology generations (Data Warehouse → Hadoop → Big Data → Cloud → Data Mesh)
- Certification: architect certifications such as Siemens Certified Software Architect or iSAQB CPSA

Required Soft Skills & Other Capabilities:
- Excellent communication skills to convey technical concepts to non-technical stakeholders
- Strong attention to detail with a proven ability to solve complex business problems
- Initiative and resilience to experiment with new ideas
- Effective planning and organizational skills
- A collaborative mindset for sharing ideas and developing solutions
- Ability to work independently and in a global team environment
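As a rough illustration of the Delta/medallion and compaction items above, the sketch below promotes raw bronze data into a cleaned silver Delta table and compacts small files using Delta Lake's optimize/executeCompaction builder (available in recent Delta releases). The paths and column names are assumed for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import io.delta.tables.DeltaTable

object BronzeToSilver {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("medallion-demo").getOrCreate()

    // Bronze: raw, append-only landing zone (paths are placeholders)
    val bronze = spark.read.format("delta").load("/lake/bronze/events")

    // Silver: cleaned, deduplicated, partitioned by event date
    bronze
      .filter(col("event_id").isNotNull)
      .dropDuplicates("event_id")
      .withColumn("event_date", to_date(col("ts")))
      .write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("/lake/silver/events")

    // Compaction: rewrite many small files into fewer large ones for scan efficiency
    DeltaTable.forPath(spark, "/lake/silver/events")
      .optimize()
      .executeCompaction()
  }
}
```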
Posted 12 hours ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
As a Data Scientist at KPMG in India, you will work closely with business stakeholders and cross-functional subject matter experts to gain a deep understanding of the business context and key questions. Your main responsibilities will include creating Proofs of Concept (POCs) and Minimum Viable Products (MVPs), guiding them through production deployment, and operationalizing projects. You will be instrumental in shaping the machine learning strategy for digital programs and projects.

Your role will involve making solution recommendations that strike a balance between speed to market and analytical soundness. You will explore design options to assess efficiency and impact, develop approaches to enhance robustness and rigor, and formulate model-based solutions by combining machine learning algorithms with other techniques. Using a variety of commercial and open-source tools such as Python, R, and TensorFlow, you will develop analytical and modeling solutions: creating algorithms to extract information from large datasets, deploying those algorithms to production to deliver actionable insights, and comparing results from different methodologies to recommend optimal techniques. You will work across multiple pillars of artificial intelligence, including cognitive engineering, conversational bots, and data science, and it will be your responsibility to ensure that the solutions you develop exhibit high levels of performance, security, scalability, maintainability, and reliability upon deployment.

In addition to your technical responsibilities, you will lead discussions, provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts, and facilitate the sharing of new ideas, learnings, and best practices across geographies.

To be successful in this role, you must have at least a Bachelor of Science or Bachelor of Engineering degree and 2-4 years of work experience as a Data Scientist, along with a combination of business focus, strong analytical and problem-solving skills, programming knowledge, and proficiency in statistical concepts and machine learning algorithms. Key technical skills include Python, SQL, Docker, and versioning tools, as well as experience with Microsoft Azure or AWS data management tools. Familiarity with Agile principles, descriptive statistics, predictive modeling, decision trees, optimization techniques, and deep learning methodologies is also essential. You must be able to lead, manage, and deliver customer business results through data scientists or professional services teams, and strong written and verbal communication skills are crucial for sharing ideas and communicating data analysis assumptions and results clearly. Skills such as agent frameworks, RAG frameworks, and knowledge of AI on cloud services are a must, while familiarity with AI algorithms, deep learning, computer vision, and responsible AI frameworks is a bonus.

Overall, as a Data Scientist at KPMG in India, you will play a vital role in developing innovative solutions, driving business outcomes through data-driven insights, and contributing to the continuous growth and success of the organization.
Posted 12 hours ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You are a Hadoop Developer with a Bachelor's degree and 3-6 years of experience, based in Chennai or Bangalore. Your primary responsibilities include working with Big Data technologies and writing SQL queries, with proficiency in tools such as Hadoop, Hive, PySpark, Scala, Azure, and C. Experience with Control-M/Airflow and DevOps will be advantageous, and you should have a good understanding of the SDLC, Scrum, and Agile methodologies. Knowledge of the banking and money laundering (AML) domains is desirable.

Your role will involve designing, developing, and implementing Big Data and data lake solutions, as well as writing test cases, validating results, and automating manual tasks. As a Hadoop Developer, you must possess strong analytical skills, excellent programming abilities, and a proven track record of delivering successful Big Data solutions. Effective communication with stakeholders and coordination with multiple teams will be crucial, and a focus on recognizing problems, providing value-added solutions, and driving business outcomes will be key to your success.
Posted 12 hours ago
4.0 - 8.0 years
0 Lacs
Surat, Gujarat
On-site
We are seeking a Data Engineer to play a crucial role in designing, constructing, and maintaining scalable data systems and infrastructure. Your primary responsibilities will include collaborating with various teams to gather requirements, developing data pipelines, and establishing best practices for data governance, security, and analytics. This position is an opportunity to shape the core of our data environment and directly influence how we harness data for business insights and innovation.

You will architect and implement data solutions by designing and building scalable data platforms using cloud services such as AWS, Azure, or GCP, along with on-premises technologies. You will develop best practices for data storage, ingestion, and processing in both batch and streaming form, and create and manage robust ETL/ELT workflows that handle various data types while optimizing pipelines for reliability, scalability, and performance.

You will also define and enforce data governance policies, ensure compliance with relevant data privacy regulations, and implement metadata management and cataloging solutions, including automating the detection of new data sets, schema changes, and lineage updates. You will establish automated checks and alerts for data quality, completeness, and consistency (a sketch follows this posting), troubleshooting and resolving data-related issues as needed. Your collaboration skills will be put to the test as you work closely with cross-functional stakeholders to translate requirements into scalable technical solutions.

Qualifications include a Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience) and at least 4 years of hands-on experience in data engineering or related fields. Essential technical skills include proficiency with cloud platforms, distributed data processing tools, SQL, programming languages, real-time data streaming solutions, data modeling, and modern data architecture patterns; experience with data governance, security, and data quality frameworks is highly desirable. Soft skills such as excellent communication, stakeholder management, and the ability to explain complex technical concepts to diverse audiences are crucial. Certifications in cloud platforms, infrastructure-as-code, orchestration tools, and containerization for data engineering pipelines are considered advantageous. If you are passionate about leveraging data for business insights and innovation and possess the technical skills and collaborative mindset required for this role, we encourage you to apply.
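A minimal sketch of the automated data quality checks mentioned above, in Spark/Scala with placeholder table and column names: the job fails fast when a completeness, validity, or consistency rule is violated, rather than letting bad data flow downstream.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object QualityGate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-gate").getOrCreate()

    // Dataset path and column names are hypothetical placeholders
    val df = spark.read.format("parquet").load("/data/curated/orders")

    val total   = df.count()
    val nullIds = df.filter(col("order_id").isNull).count()
    val dupIds  = total - df.dropDuplicates("order_id").count()

    // require() aborts the run loudly, which a scheduler can turn into an alert
    require(total > 0, "completeness check failed: dataset is empty")
    require(nullIds == 0, s"validity check failed: $nullIds null order_id values")
    require(dupIds == 0, s"consistency check failed: $dupIds duplicate order_id values")

    println(s"quality gate passed: $total rows verified")
  }
}
```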
Posted 12 hours ago
7.0 - 11.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
The Lead Data Engineer (Databricks) position is an exciting opportunity for individuals with 7 to 10 years of data engineering experience to join our team in Pune or Ahmedabad. As a Lead Data Engineer, you will play a crucial role in enhancing our data engineering capabilities while working with cutting-edge technologies such as Databricks and generative AI.

Your responsibilities will include leading the design, development, and optimization of data solutions using Databricks, ensuring scalability, efficiency, and security. You will collaborate with cross-functional teams to gather and analyze data requirements and translate them into robust data architectures and solutions, and you will develop and maintain ETL pipelines, integrating with Azure Data Factory as necessary. You will also implement machine learning models and advanced analytics solutions, incorporating generative AI to drive innovation, and follow data quality, governance, and security practices to maintain the integrity and reliability of data solutions. As a technical leader, you will mentor junior engineers, creating an environment of learning and growth within the team.

To qualify, you should have a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory or AWS Glue, along with proficiency in SQL and programming languages like Python or Scala, is crucial. A strong understanding of data modeling, ETL processes, data warehouse and data lakehouse concepts, cloud platforms (especially Azure), and containerization technologies is required.

Preferred skills include experience with generative AI technologies, familiarity with AWS or GCP cloud platforms, and knowledge of data governance frameworks and tools. Excellent analytical, problem-solving, and communication skills, along with demonstrated leadership ability and experience mentoring junior team members, will be advantageous. You are expected to stay updated on the latest trends in data engineering, Databricks, generative AI, and Azure Data Factory to continuously enhance the team's capabilities.
Posted 12 hours ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be responsible for understanding software requirements and developing them into working source code within the specified timelines. Collaboration with team members to complete deliverables will be a key aspect of the role, and you will be accountable for recording and managing production incidents and errors, engaging and notifying the team as necessary. Your participation in resolving production incidents with stakeholders and management will be crucial, and you will manage application and infrastructure production configuration, processes, and releases. You will be mentored on best practices followed in the software industry and contribute to all stages of the software development lifecycle: envisioning system features and functionality, ensuring application designs align with business goals, and writing well-designed, testable code. You will be expected to develop and test software to high quality standards, contribute to production support, resolve issues in a timely manner, understand Agile practices, and set priorities on work products based on agreed iteration goals. Effective collaboration with team members to share best practices and the flexibility to work and support Paris hours are required.

As a Big Data specialist engineer with a financial-domain background, you must have hands-on experience in Hadoop ecosystem application development and in Spark and Scala development. A thorough understanding of Hadoop and its ecosystem, modern and traditional databases, SQL, and microservices is required, along with excellent Scala coding skills, hands-on experience with Spark, Apache NiFi, and Kafka, and proficiency with Linux environments and tools such as Git, Jenkins, and Ansible. Experience in the financial and banking domain, specifically in the Credit Risk chain, is a must. You should possess excellent communication skills, work effectively both independently and in a team, and have good problem-solving skills and attention to detail.

Joining us at Société Générale offers the opportunity to be part of a team that believes in the transformative power of individuals. You will have the chance to impact the future by creating, daring, innovating, and taking action. Whether you're here for a short period or planning a long-term career, together we can make a positive difference. Employee engagement in solidarity actions, support for ESG principles, and fostering diversity and inclusion are integral parts of our organizational culture. If you are looking to grow in a stimulating environment, contribute positively to society, and enhance your expertise, you will find a welcoming home with us.
Posted 13 hours ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
The Principal Data Engineer (Associate Director) role is a permanent position based in Bangalore within the ISS department at Fidelity. As part of the ISS Data Platform Team, you will design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics. You will lead a team of senior and junior developers, providing mentorship and guidance, while collaborating with enterprise architects, business analysts, and stakeholders to understand data requirements and drive technical innovation within the department.

Your key responsibilities will include taking ownership of technical delivery, leading a subsection of the wider data platform, and maximizing code reusability, quality, and developer productivity. You will be expected to challenge the status quo by implementing the latest data engineering practices and techniques. You should be able to leverage cloud-based data platforms such as Snowflake and Databricks, have expertise in the AWS ecosystem (particularly Lambda, EMR, MSK, Glue, and S3), and be proficient in Python, SQL, and CI/CD pipelines.

The ideal candidate will possess advanced technical skills in designing event-based or streaming data architectures using tools like Kafka, implementing data access controls for regulatory compliance, and using both RDBMS and NoSQL offerings. Experience with CDC ingestion, orchestration tools like Airflow, and containerization technologies is desirable, and strong soft skills in problem-solving, strategic communication, and project management are crucial.

At Fidelity, we offer a comprehensive benefits package, prioritize your well-being, support your development, and promote a flexible work environment. We are committed to ensuring that you feel motivated by your work, happy to be part of our team, and able to build your future with us. To learn more about our work culture, commitment to dynamic working, and career growth opportunities, visit careers.fidelityinternational.com.
Posted 14 hours ago
5.0 years
0 Lacs
Chandigarh
On-site
bebo Technologies is a leading complete software solution provider; bebo stands for 'be extension be offshore'. We are a business partner of QASource, Inc., USA [www.QASource.com]. We offer outstanding services in the areas of software development, sustenance engineering, quality assurance, and product support. bebo is dedicated to providing high-caliber offshore software services and solutions, and our goal is to 'Deliver in time-every time'. For more details, visit our website: www.bebotechnologies.com. Take a 360° tour of our bebo premises via the link below:
https://www.youtube.com/watch?v=S1Bgm07dPmM

Key skill set required:
- 5–7 years of software development experience, with 3+ years focused on building ML systems
- Advanced programming skills in Python; working knowledge of Java, Scala, or C++ for backend services
- Proficiency with ML frameworks: TensorFlow, PyTorch, scikit-learn
- Experience deploying ML solutions in cloud environments (AWS, GCP, Azure) using tools like SageMaker, Vertex AI, or Databricks
- Strong grasp of distributed systems, CI/CD for ML, containerization (Docker/K8s), and serving frameworks
- Deep understanding of algorithms, system design, and data pipelines
- Experience with MLOps platforms (MLflow, Kubeflow, TFX) and feature stores
- Familiarity with LLMs, RAG architectures, or multimodal AI
- Experience with real-time data and streaming systems (Kafka, Flink, Spark Streaming)
- Exposure to governance/compliance in regulated industries (e.g., healthcare, finance)
- Published research, patents, or contributions to open-source ML tools is a strong plus
Posted 23 hours ago
5.0 years
4 - 9 Lacs
Hyderābād
Remote
Data Engineer - Remote role based in India.

Note: this is a full-time, remote, salaried position through Red Elk Consulting, LLC, based in India. The role is 100% focused on and dedicated to supporting Together Labs, as a consultant, and includes salary, benefits, vacation, and a local India-based support team.

We are seeking an experienced and motivated Data Engineer to join our dynamic data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines, managing our data warehouse infrastructure, and supporting analytics initiatives across the organization. You will work closely with data scientists, analysts, and other stakeholders to ensure data quality, integrity, and accessibility, enabling the organization to make data-driven decisions.

RESPONSIBILITIES
- Design and develop data pipelines: architect, develop, and maintain robust, scalable pipelines for ingesting, processing, and transforming large volumes of data from multiple sources in real-time and batch modes.
- Data warehouse management: manage, optimize, and maintain the data warehouse infrastructure, ensuring data integrity, security, and availability, and oversee best practices for data storage, partitioning, indexing, and schema design.
- ETL processes: design and build efficient ETL (Extract, Transform, Load) processes to move data across systems with high performance, reliability, and scalability.
- Data integration: integrate diverse data sources (structured, semi-structured, and unstructured) into a unified data model that supports analytics and reporting needs.
- Support analytics and BI: collaborate with data analysts, data scientists, and business intelligence teams to understand data requirements and provide data sets, models, and solutions that support their analytics needs.
- Data quality and governance: establish and enforce data quality standards, governance policies, and best practices, and implement monitoring and alerting to ensure data accuracy, consistency, and completeness.
- Operational excellence: drive the development of automated systems for provisioning, deployment, monitoring, failover, and recovery, and implement systems to monitor key performance metrics, logs, and alerts with a focus on automation and reducing manual intervention.
- Cross-functional collaboration: work closely with product, engineering, and QA teams to ensure the infrastructure supports and enhances development workflows and that services are deployed and operated smoothly at scale.
- Incident management and root cause analysis: act as a first responder to data production issues, leading post-mortems and implementing long-term solutions to prevent recurrence, ensuring all incidents are handled promptly with minimal impact.
- Security and compliance: ensure our infrastructure is designed with security best practices in mind, including encryption, access control, and vulnerability scanning.
- Continuous improvement: stay up to date with industry trends, technologies, and best practices, bringing innovative ideas into the team to improve reliability, performance, and scale.

QUALIFICATIONS
Education & experience:
- Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 5+ years of experience in data engineering, with a strong background in systems architecture, distributed systems, cloud infrastructure, or a related field.
- Proven experience building and managing data pipelines, data warehouses, and ETL processes.

Technical skills:
- Strong proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, Oracle) and data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Expertise in data pipeline tools and frameworks (e.g., AWS Glue, Google Dataflow, Apache Airflow, Apache NiFi, dbt)
- Hands-on experience with cloud platforms and their data services (e.g., AWS, Azure, Google Cloud Platform)
- Proficiency in programming languages such as Python, Java, or Scala for data manipulation and automation
- Knowledge of data modeling, schema design, and data governance principles
- Familiarity with distributed data processing frameworks like Apache Spark, Hadoop, or similar
- Experience with BI tools (e.g., Tableau, Power BI, Looker)
- Experience with AWS and standard practices for working in cloud-based environments

Soft skills:
- Strong problem-solving and analytical skills with keen attention to detail
- Excellent communication and collaboration skills, with the ability to work effectively with technical and non-technical stakeholders
- A proactive mindset with the ability to work independently and handle multiple tasks in a fast-paced environment

ABOUT US
Together Labs innovates technologies that empower people worldwide to connect, create, and earn in virtual worlds. Our mission is to redefine social media as a catalyst for authentic human connection through the development of a family of products grounded in this core value. These include IMVU, the world's largest friendship discovery and social platform, and VCOIN, the first regulatory-approved transferable digital currency. For more information, please visit https://togetherlabs.com/

Founded in 2004 and based in the heart of Silicon Valley, Together Labs is led by a team dedicated to pioneering in virtual worlds. Together Labs is backed by venture investors Allegis Capital, Bridgescale Partners, and Best Buy Capital. Together Labs (formerly IMVU) has been named a Best Place to Work in Silicon Valley for nine years running.

HOW TO APPLY
Please familiarize yourself with our products, and feel free to try out our core product at https://www.imvu.com/

Together Labs is an equal opportunity employer committed to fostering a culture of inclusion. Our unique differences enable us to learn, collaborate, and grow together. We welcome all applicants without regard to race, color, religious creed, sex, national origin, citizenship status, age, physical or mental disability, sexual orientation, gender identification, marital, parental, veteran or military status, unfavorable military discharge, decisions regarding reproductive health, or any other status protected by applicable federal, state, or local law. This is a remote position.
Posted 23 hours ago
8.0 years
0 Lacs
Hyderābād
On-site
At least 8+ years of experience and strong knowledge of the Scala programming language; able to write clean, maintainable, and efficient Scala code following best practices.
Good knowledge of fundamental data structures and their usage.
At least 8+ years of experience designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies, with expertise in Spark Core, Spark SQL, and Spark Streaming.
Experience with Hadoop, HDFS, Hive, and other big data technologies.
Familiarity with data warehousing and ETL concepts and techniques.
Expertise in database concepts and SQL/NoSQL operations.
UNIX shell scripting for scheduling and running application jobs will be an added advantage.
At least 8 years of experience in project development life cycle activities and maintenance/support projects.
Work in an Agile environment and participate in daily scrum standups, sprint planning, reviews, and retrospectives.
Understand project requirements and translate them into technical solutions that meet the project's quality standards.
Ability to work in a team in a diverse, multi-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues.
Strong problem-solving and analytical skills.
Excellent verbal and written communication skills.
Experience with, and a desire to work in, a global delivery environment.
Stay up to date with new technologies and industry trends in development.
Job Types: Full-time, Permanent, Contractual / Temporary
Pay: ₹5,000.00 - ₹9,000.00 per day
Work Location: In person
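For a role like this, interviewers typically probe the Structured Streaming API. The sketch below is a hedged example, assuming a Kafka broker at broker:9092 and a topic named clicks (both hypothetical); it counts events per one-minute window. Running it also requires the spark-sql-kafka connector on the classpath:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream").getOrCreate()
    import spark.implicits._

    // Read a Kafka topic as a streaming DataFrame (broker/topic names are assumptions).
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clicks")
      .load()

    // Kafka values arrive as bytes; cast to string, then count events
    // per one-minute window, tolerating five minutes of late data.
    val counts = stream
      .selectExpr("CAST(value AS STRING) AS click", "timestamp")
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Emit rolling counts to the console; a real job would write to a
    // durable sink such as HDFS or a Hive table instead.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```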
Posted 23 hours ago
1.0 years
1 - 5 Lacs
Hyderābād
On-site
Are you looking for an opportunity to join a team of engineers in positively affecting the experience of every consumer who uses Microsoft products? The OSSE team in the OPG group is focused on building client experiences and services that light up Microsoft Account experiences across all devices and platforms. We are passionate about working together to build delightful and inclusive account experiences that empower customers to get the most out of what Microsoft has to offer.
We're looking for a collaborative, inclusive, and customer-obsessed engineer to help us build and sustain authentication experiences like passkeys, and to engage with our customers by building experiences that help users keep their accounts secure and connected across multiple devices and applications. We're looking for an enthusiastic Software Engineer to help us build account experiences and deliver business intelligence through data for experiences across 1.5 billion Windows devices and various Microsoft products. Your responsibilities will include working closely with a variety of teams such as Engineering, Program Management, Design, and application partners to understand the key business questions for customer-facing scenarios, to set up the key performance indicators, and to set up data pipelines that identify insights and experiment ideas that move our business metrics.
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
Enable the Windows, Developers, and Experiences team to do more with data across all aspects of the development lifecycle.
Contribute to a data-driven culture as well as a culture of experimentation across the organization.
Provide new and improve upon existing platform offerings with a fundamental understanding of the end-to-end scenarios.
Collaborate with partner teams and customers to scope and deliver projects.
Write secure, reliable, scalable, and maintainable code, and then effectively debug it, test it, and support it.
Author and design big data ETL pipelines in SCOPE, Scala, SQL, Python, or C#.
Qualifications
Required Qualifications:
Bachelor's degree in Computer Science or a related technical discipline with proven experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
Proven coding and debugging skills in C#, C++, Java, or SQL.
Ability to work and communicate effectively across disciplines and teams.
Preferred Qualifications:
1+ years of experience in data engineering.
Understanding of and experience with cloud data technologies such as Azure Synapse, Azure Data Factory, SQL, Azure Data Explorer, Power BI, PowerApps, Hadoop, YARN, and Apache Spark.
Excellent analytical skills with a systematic and structured approach to software design.
Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
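To make the data-pipeline portion of the posting above concrete, here is a small Scala/Spark sketch that computes a daily sign-in success rate per platform. The table and column names (signin_events, event_ts, succeeded) are invented for illustration; an actual pipeline on this team might instead be written in SCOPE or run in Azure Synapse:

```scala
import org.apache.spark.sql.SparkSession

object AccountKpis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("account-kpis").getOrCreate()

    // Illustrative source path; a real pipeline would point at the team's telemetry store.
    spark.read.parquet("/data/signin_events").createOrReplaceTempView("signin_events")

    // Example KPI: daily sign-in success rate and attempt volume per platform.
    val kpi = spark.sql(
      """
        |SELECT platform,
        |       to_date(event_ts) AS day,
        |       AVG(CASE WHEN succeeded THEN 1.0 ELSE 0.0 END) AS success_rate,
        |       COUNT(*) AS attempts
        |FROM signin_events
        |GROUP BY platform, to_date(event_ts)
      """.stripMargin)

    kpi.write.mode("overwrite").parquet("/data/kpi/signin_success")
    spark.stop()
  }
}
```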
Posted 23 hours ago
12.0 years
1 - 10 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
Manage and mentor a team of data engineers, fostering a culture of innovation and continuous improvement.
Design and maintain robust data architectures, including databases and data warehouses.
Oversee the development and optimization of data pipelines for efficient data processing.
Implement measures to ensure data integrity, including validation, cleansing, and governance practices.
Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions.
Analyze, synthesize, and interpret data from a variety of data sources, investigating, reconciling, and explaining data differences to understand the complete data lifecycle.
Architect with a modern technology stack and design public cloud applications leveraging Azure.
Take a structured, standard approach to work.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
Undergraduate degree or equivalent experience.
12+ years of implementation experience on time-critical production projects following key software development practices.
8+ years of programming experience in Python or any programming language.
6+ years of hands-on programming experience in Spark using Scala/Python.
4+ years of hands-on working experience with Azure services such as Azure Databricks, Azure Data Factory, Azure Functions, and Azure App Service.
Good knowledge of writing SQL queries.
Good knowledge of building REST APIs.
Good knowledge of tools like Azure DevOps and GitHub.
Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on the business logic or for optimization.
Ability to learn modern technologies and be part of fast-paced teams.
Proven analytical and communication skills (both verbal and written).
Proficiency with AI-powered development tools such as GitHub Copilot, AWS CodeWhisperer, Google's Codey (Duet AI), or any relevant tools is expected. Candidates should be adept at integrating these tools into their workflows to accelerate development, improve code quality, and enhance delivery velocity.
Expected to proactively leverage AI tools throughout the software development lifecycle to drive faster iteration, reduce manual effort, and boost overall engineering productivity.
Preferred Qualifications:
Good knowledge of Docker and Kubernetes services.
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
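As a hedged sketch of the Spark-in-Scala work this role calls for, the snippet below aggregates a claims dataset stored in ADLS Gen2. The abfss:// paths, storage account name, and schema are placeholders rather than this employer's real layout; on Azure Databricks the SparkSession is already provided as spark:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsDaily {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("claims-daily").getOrCreate()

    // Hypothetical ADLS Gen2 source; real container/account names would differ.
    val claims = spark.read.parquet(
      "abfss://raw@exampleaccount.dfs.core.windows.net/claims/")

    // Aggregate claim amounts per day and state - the kind of transformation
    // an Azure Data Factory pipeline would schedule on a Databricks cluster.
    val daily = claims
      .groupBy(to_date(col("claim_ts")).as("day"), col("state"))
      .agg(sum("amount").as("total_amount"), count(lit(1)).as("claims"))

    daily.write.mode("overwrite").parquet(
      "abfss://curated@exampleaccount.dfs.core.windows.net/claims_daily/")

    spark.stop()
  }
}
```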
Posted 23 hours ago
175.0 years
2 - 7 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
Description:
The Analytics, Investment and Marketing Enablement (AIM) team - part of the GCS Marketing organization - is the analytical engine that enables the Global Commercial Card business. The team drives profitable growth in acquisitions through data, analytics, and AI-powered targeting and personalization capabilities. This B30 role is part of the AIM India team, based out of Gurgaon, and is responsible for proactive-retention and save-a-card analytics for the SME segment across marketing and sales distribution channels. This critical role represents a unique opportunity to influence 2+ billion in charge volume. An important focus of the role is quantitatively determining value, deriving insights, and then ensuring the insights are leveraged to create positive impact that makes a meaningful difference to the business.
Key Responsibilities include:
Develop/enhance precursors in AI models, partnering with Decision Science, and collaborate across Marketing, Risk, and Sales to help design customized treatments depending on the precursors.
Be a key analytical partner to the Marketing and Measurement teams to report on digital, field, and phone programs that promote growth and retention.
Support and enable GCS partners with actionable, insightful analytical solutions (such as triggers and prioritization tiers) to help the field and phone sales teams prioritize efforts effectively.
Partner with functional leaders, strategic business partners, and senior leaders to assess and identify opportunities for better customer engagement and revenue growth.
Excellent communication skills with the ability to engage, influence, and inspire partners and stakeholders to drive collaboration and alignment.
Exceptional execution skills - able to resolve issues, identify opportunities, define success metrics, and make things happen.
Drive automation and ongoing refinement of analytical frameworks.
Willingness to challenge the status quo; breakthrough thinking to generate insights, alternatives, and opportunities for business success.
High degree of organization, individual initiative, and personal accountability.
Minimum Qualifications:
Strong programming skills and experience building models and analytical data products are required.
Experience with technologies such as Java, Big Data, PySpark, Hive, Scala, and Python.
Proficiency and experience in applying cutting-edge statistical and machine learning techniques to business problems, leveraging external thinking (from academia and/or other industries) to develop best-in-class data science solutions.
Excellent communication and interpersonal skills, and the ability to build and retain strong working relationships.
Ability to interact effectively and deliver compelling messages to business leaders across various band levels.
Preferred Qualifications:
Good knowledge of statistical techniques like hypothesis testing, regression, k-NN, t-tests, and chi-square tests.
Demonstrated ability to work independently and across a matrix organization, partnering with capabilities, decision sciences, technology teams, and external vendors to deliver solutions at top speed.
Experience with commercial data and the ability to create insights and drive results.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
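As one hedged example of the statistical techniques listed above, Spark MLlib ships a chi-square independence test. The toy churn data below is fabricated purely for illustration and is not a representation of any real commercial-card dataset:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.stat.ChiSquareTest
import org.apache.spark.sql.SparkSession

object RetentionSignal extends App {
  val spark = SparkSession.builder()
    .appName("retention-signal")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  // Toy data: label = 1.0 if the account churned; features are usage signals.
  val data = Seq(
    (0.0, Vectors.dense(12.0, 1.0)),
    (1.0, Vectors.dense(2.0, 0.0)),
    (0.0, Vectors.dense(9.0, 1.0)),
    (1.0, Vectors.dense(1.0, 0.0))
  ).toDF("label", "features")

  // Chi-square test of independence for each feature against the churn label.
  val result = ChiSquareTest.test(data, "features", "label").head()
  println(s"p-values: ${result.getAs[org.apache.spark.ml.linalg.Vector](0)}")

  spark.stop()
}
```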
Posted 23 hours ago
4.0 years
19 - 39 Lacs
Noida
On-site
Sr Software Engineer (3 Openings)
RACE Consulting is hiring for one of our top clients in the cybersecurity and AI space. If you're passionate about cutting-edge technology and ready to work on next-gen AI-powered log management and security automation, we want to hear from you!
Role Highlights:
Work on advanced agentic workflows, threat detection, and behavioral analysis
Collaborate with a world-class team of security researchers and data scientists
Tech stack: Scala, Python, Java, Go, Docker, Kubernetes, IaC
Who We're Looking For:
4+ years of experience in backend development
Strong knowledge of microservices, containerization, and cloud-native architecture
Bonus if you've worked in cybersecurity or AI-driven analytics
Job Type: Full-time
Pay: ₹1,950,000.00 - ₹3,900,000.00 per year
Benefits:
Flexible schedule
Health insurance
Leave encashment
Provident Fund
Work Location: In person
Posted 23 hours ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and share some practical insights.
Cities such as Bengaluru, Hyderabad, Gurgaon, and Noida are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
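As a compact sketch of what "functional programming concepts" means in practice - immutable case classes, higher-order functions, and pattern matching - consider the example below; the job data in it is made up purely for illustration:

```scala
// Immutable data, higher-order functions, and pattern matching:
// the functional style Scala interviewers most often probe.
final case class Job(title: String, salaryLakhs: Double, city: String)

object ScalaBasics extends App {
  val jobs = List(
    Job("Scala Developer", 8.0, "Bengaluru"),
    Job("Senior Developer", 15.0, "Hyderabad"),
    Job("Tech Lead", 22.0, "Gurgaon")
  )

  // Higher-order functions: filter and map instead of mutable loops.
  val seniorTitles = jobs.filter(_.salaryLakhs >= 10).map(_.title)

  // Pattern matching on structure rather than chained if/else.
  def describe(job: Job): String = job match {
    case Job(title, s, _) if s >= 20 => s"$title (leadership band)"
    case Job(title, _, city)         => s"$title in $city"
  }

  println(seniorTitles)           // List(Senior Developer, Tech Lead)
  jobs.map(describe).foreach(println)
}
```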
Here are 25 interview questions that you may encounter when applying for Scala roles:
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!