Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
8.0 - 12.0 years
10 - 14 Lacs
Mumbai, Pune
Work from Office
The Developer will form part of our existing development team, and the candidate will have expertise in C#, .NET Core, Microsoft SQL and Azure API development. This role is focused on building and maintaining interfaces between systems, ensuring seamless data exchange and integration across platforms. The ideal candidate will have strong experience in API development, cloud-based solutions, and software integrations. The successful applicant will work with international and project teams. Experience working with Agile and DevOps methodologies would be preferable. Job Specification: In this context the successful candidate will: Design, develop, and maintain robust, scalable APIs using C# and .NET Core. Develop and implement Azure-based APIs and integration solutions to connect various enterprise systems. Collaborate with cross-functional teams to analyze system requirements and create efficient data exchange solutions. Troubleshoot and resolve issues in existing integrations to enhance performance and reliability. Ensure APIs and integrations adhere to security, compliance, and performance best practices. Document technical designs, integration processes, and best practices. Stay updated with emerging technologies and industry trends in system integrations. Troubleshoot, debug, and upgrade software components and features. Support and maintain existing systems where possible. Work as part of a project team on larger projects, developing new features. Program and implement system designs. Engage with clients and other stakeholders. Collaborate with other developers, designers, testers, and project managers using agile methodologies and tools such as Git or Azure DevOps. Write technical documentation and testing scripts. Apply working procedures, methodologies, and tools according to ITIL best practices and internal procedures. Comply with information security best practices and guidelines. Participate in the elaboration and maintenance of the product knowledge base.
Skills Required: The successful candidate will have: A Bachelor's degree in Computer Science or a relevant technical field. A minimum of 8 years of development experience. Proven experience in the full software development lifecycle within an agile environment. Advanced working knowledge of T-SQL (DDL, DML, JSON, XML). Extensive experience with large datasets and incremental batch loading methodologies. An advanced understanding of relational data structures, including keys, constraints, and triggers. Performance tuning and optimization of RDBMS. Expertise in relational database technologies in a high-data-volume transactional systems environment. Ability to design and implement conceptual, logical, and physical data models. Solid experience in data modeling, data management, and governance methodologies. Ability to develop unit tests of code components. Advantageous: Experience with the Microsoft stack (SSIS, SSRS, SSAS, Power BI, SQL Server). Experience building DevOps automation is beneficial.
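The incremental batch loading the posting above asks for usually hinges on a persisted watermark: each run picks up only rows modified since the previous run. A minimal sketch in plain Python, with hypothetical record fields (`id`, `modified_at`), not the posting's actual schema:

```python
# Watermark-based incremental load: keep only rows newer than the last
# watermark, and compute the next watermark to persist for the next run.

def incremental_load(source_rows, last_watermark):
    """Return rows modified after the previous watermark, plus the new watermark."""
    new_rows = [r for r in source_rows if r["modified_at"] > last_watermark]
    next_watermark = max((r["modified_at"] for r in new_rows), default=last_watermark)
    return new_rows, next_watermark

rows = [
    {"id": 1, "modified_at": 10},
    {"id": 2, "modified_at": 25},
    {"id": 3, "modified_at": 40},
]

batch, wm = incremental_load(rows, last_watermark=20)
print([r["id"] for r in batch], wm)  # → [2, 3] 40
```

In T-SQL the same pattern is typically a `WHERE modified_at > @last_watermark` predicate plus a control table storing the watermark per source.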
Posted 1 day ago
2.0 - 5.0 years
18 - 21 Lacs
Hyderabad
Work from Office
Overview Annalect is currently seeking a data engineer to join our technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products as well as research and evaluation of new technical solutions. Responsibilities Design, build, test and deploy scalable and reusable systems that handle large amounts of data. Collaborate with product owners and data scientists to build new data products. Ensure data quality and reliability. Qualifications Experience designing and managing data flows. Experience designing systems and APIs to integrate data into applications. 4+ years of Linux, Bash, Python, and SQL experience. 2+ years using Spark and other frameworks to process large volumes of data. 2+ years using Parquet, ORC, or other columnar file formats. 2+ years using AWS cloud services, especially services used for data processing, e.g. Glue, Dataflow, Data Factory, EMR, Dataproc, HDInsight, Athena, Redshift, BigQuery, etc. Passion for Technology: Excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges
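The columnar formats mentioned above (Parquet, ORC) store each column contiguously instead of each row, which is why analytical scans over a few columns are cheap. A toy plain-Python illustration of the idea, using made-up records:

```python
# Row-oriented records pivoted into a columnar layout, the idea behind
# Parquet/ORC: an aggregate over one column scans a single list rather
# than every field of every record.

rows = [
    {"user": "a", "clicks": 3, "country": "IN"},
    {"user": "b", "clicks": 7, "country": "US"},
    {"user": "c", "clicks": 5, "country": "IN"},
]

# Pivot: one list per column name.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# The aggregate touches only the "clicks" column.
total_clicks = sum(columns["clicks"])
print(total_clicks)  # → 15
```

Real columnar files add per-column compression and statistics (min/max per chunk) on top of this layout, enabling predicate pushdown.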
Posted 2 days ago
2.0 - 3.0 years
10 - 14 Lacs
Coimbatore
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must-have skills: Microsoft BOT Framework. Good-to-have skills: No Technology Specialization. Minimum 7.5 year(s) of experience is required. Educational Qualification: Must have: BE/BTech/MCA; Good to have: ME/MTech. Key Responsibilities: (a) Work closely with client teams to define and architect the solution for our clients, estimating the components required to provide a comprehensive AI solution that meets and exceeds the clients' expectations, delivering tangible business value. (b) Deliver cognitive applications that solve or augment business issues using leading AI technology frameworks. (c) Direct and influence client Bot/Virtual Agent architecture, solutioning and development. Technical Experience: (a) 7-8 years of experience and a thorough understanding of the Azure PaaS landscape. (b) 2-3 years of experience in the Microsoft AI/Bot Framework. (c) Good to have: knowledge of or experience in Azure Machine Learning, AI, and Azure HDInsight. Professional Attributes: (a) Strong problem-solving and analytical skills. (b) Good communication skills. (c) Ability to work independently with little supervision or as part of a team. (d) Able to work and deliver under tight timelines.
Posted 2 days ago
2.0 - 3.0 years
10 - 14 Lacs
Coimbatore
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must-have skills: Microsoft BOT Framework. Good-to-have skills: No Technology Specialization. Minimum 7.5 year(s) of experience is required. Educational Qualification: BE/B-Tech/M-Tech. Key Responsibilities: (a) Work closely with client teams to define and architect the solution for our clients, estimating the components required to provide a comprehensive AI solution that meets and exceeds the clients' expectations, delivering tangible business value. (b) Deliver cognitive applications that solve or augment business issues using leading AI technology frameworks. (c) Direct and influence client Bot/Virtual Agent architecture, solutioning and development. Technical Experience: (a) 7-8 years of experience and a thorough understanding of the Azure PaaS landscape. (b) 2-3 years of experience in the Microsoft AI/Bot Framework. (c) Good to have: knowledge of or experience in Azure Machine Learning, AI, and Azure HDInsight. Professional Attributes: (a) Strong problem-solving and analytical skills. (b) Good communication skills. (c) Ability to work independently with little supervision or as part of a team. (d) Able to work and deliver under tight timelines.
Posted 2 days ago
6.0 - 7.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, any cloud computing, etc.). Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in Azure and Databricks or Cloudera Spark certified developers.
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, any cloud computing, etc.). Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka technologies. Preferred technical and professional experience: Certification in Azure and Databricks or Cloudera Spark certified developers.
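The streaming-pipeline experience described above typically follows the micro-batch model used by Spark Structured Streaming: events arrive in small batches and a keyed aggregate is updated on each trigger. A plain-Python simulation of that model (the event stream and keys are illustrative, not from the posting):

```python
# Micro-batch stream processing: maintain a running per-key sum across
# arriving batches and emit the updated aggregate after each trigger,
# mirroring what a stateful streaming aggregation does.

from collections import defaultdict

def process_micro_batches(batches):
    """batches: iterable of lists of (key, value) events."""
    state = defaultdict(int)
    outputs = []
    for batch in batches:
        for key, value in batch:
            state[key] += value
        outputs.append(dict(state))  # snapshot emitted per micro-batch
    return outputs

stream = [[("clicks", 2), ("views", 1)], [("clicks", 3)]]
print(process_micro_batches(stream))
# → [{'clicks': 2, 'views': 1}, {'clicks': 5, 'views': 1}]
```

In a real pipeline the batches would come from Kafka and the state would be checkpointed to durable storage so the job can recover after failure.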
Posted 1 week ago
4.0 - 8.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Reference 250009V9 Responsibilities Independently design components, develop code and test case scenarios by applying relevant software craftsmanship principles and meet the acceptance criteria, Complete the assigned learning path, Take part in team ceremonies, be it agile practices or chapter meetings, Deliver on all aspects of the Software Development Lifecycle (SDLC) in line with Agile and IT craftsmanship principles, Support decomposition of customer requests into detailed stories by interacting with the product owner, Deliver high-quality clean code and design that can be re-used, Actively work with other development teams to define and implement APIs and rules for data access, Ensure customers, stakeholders and partners are communicated with on time, Assess production improvement areas such as recurrent issues, Perform daily checks and maintain required standards and production processes, Provide suggestions for automating repetitive and regular production activities, Perform bug-free release validations and produce metrics, tests and defect reports, Assist in developing guidelines and ensuring that the team practices them, Ability to perform level 2/level 3 production support, Increase coverage of data models, data dictionary, data pipeline standards, storage of source, process and consumer metadata (#reuse and #extend). Profile required Experience: 3-5 years. Mandatory skills: Java, Spring Boot Framework, Java/Spark, CI/CD, SQL. Detailed Job Description: Should have 3 to 5 years of hands-on experience working on Java and Spring Framework related components. Should have at least 2 years of hands-on experience using Java Spark on HDInsight or SoK8s. Should have at least 2 years of hands-on experience using container and orchestration tools such as Docker and Kubernetes. Should have experience working on projects using Agile methodologies and CI/CD pipelines. Should have experience working on at least one RDBMS database such as
Oracle, PostgreSQL or SQL Server. Nice to have: exposure to Linux platforms such as RHEL and cloud platforms such as Azure Data Lake. Nice to have: exposure to the Investment Banking domain. Why join us We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. Business insight At Société Générale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious. Whether you're joining us for a period of months, years or your entire career, together we can have a positive impact on the future. Creating, daring, innovating, and taking action are part of our DNA. If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis and develop or strengthen your expertise, you will feel right at home with us!
Still hesitating? You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including sponsoring people struggling with their orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved. We are committed to supporting the acceleration of our Group's ESG strategy by implementing ESG principles in all our activities and policies. These are translated into our business activity (ESG assessment, reporting, project management or IT activities), our work environment and our responsible practices for environmental protection. Diversity and Inclusion: We are an equal opportunities employer and we are proud to make diversity a strength for our company. Societe Generale is committed to recognizing and promoting all talents, regardless of their beliefs, age, disability, parental status, ethnic origin, nationality, gender identity, sexual orientation, membership of a political, religious, trade union or minority organisation, or any other characteristic that could be subject to discrimination.
Posted 1 week ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION Senior Data Engineer / Data Engineer LOCATION Bangalore/Mumbai/Kolkata/Gurugram/Hyd/Pune/Chennai EXPERIENCE 2+ Years JOB TITLE: Senior Data Engineer / Data Engineer OVERVIEW OF THE ROLE: As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security. HASHEDIN BY DELOITTE 2025 Mandatory Skills: Hands-on software coding or scripting for a minimum of 3 years. Experience in product management for at least 2 years. Stakeholder management experience for at least 3 years. Experience in one of the GCP, AWS or Azure cloud platforms. Key Responsibilities: Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi). Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks. Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads. Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code). Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks. Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use-cases. Contribute to data modeling, schema design, data partitioning strategies, and ensure adherence to best practices for performance and cost optimization. Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies. Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA). Take ownership of end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting. Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase. Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed. Skills & Experience: Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar). Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.). Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity). Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming). Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro). Strong SQL development skills for ETL, analytics, and performance optimization. 
Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments. Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing. Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management. Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA). Familiarity with BI or visualization tools (PowerBI, Tableau, Looker, etc.) is an advantage but not core. Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus. Bonus: Exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes. Professional Attributes: Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation. Ability to communicate technical designs and issues effectively with team members and stakeholders. Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments. Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices. Desirable Experience: Contributions to open source data engineering/tools communities. Implementing data cataloging, stewardship, and data democratization initiatives. Hands-on work with DataOps/DevOps pipelines for code and data. Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial. EDUCATIONAL QUALIFICATIONS: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience). 
Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks). Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes. Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
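The data quality responsibilities this posting lists (validation, cleansing, anomaly detection) are usually expressed as named, rule-based checks over each record, in the spirit of Great Expectations or Deequ. A minimal plain-Python sketch; the rules and records are made-up examples, not a real dataset's contract:

```python
# Rule-based data quality checks: apply named predicates to each record
# and collect (record_index, rule_name) pairs for every failure.

def validate(records, rules):
    """records: list of dicts; rules: mapping of rule name -> predicate."""
    failures = []
    for i, rec in enumerate(records):
        for name, rule in rules.items():
            if not rule(rec):
                failures.append((i, name))
    return failures

rules = {
    "id_not_null": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

data = [{"id": 1, "amount": 10}, {"id": None, "amount": -5}]
print(validate(data, rules))  # → [(1, 'id_not_null'), (1, 'amount_non_negative')]
```

Production frameworks add severity levels, sampling for large tables, and routing of failed rows to a quarantine table for reconciliation.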
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include: Comprehensive Feature Development and Issue Resolution: Working on end-to-end feature development and solving challenges faced in the implementation. Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand the problems and issues with the product and features, and solve the issues as per defined SLAs. Continuous Learning and Technology Integration: Being eager to learn new technologies and implementing the same in feature development. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Relevant experience with APIM (Azure API Management). Proficient in .NET. Proficient with Azure platform development (Azure Functions, Azure services, etc.). Candidate should be from a .NET background. Azure services such as Azure Functions, API integration, Logic Apps, APIM, Azure Storage (Blob, Table), Cosmos DB, etc. Preferred technical and professional experience: .NET Azure full stack. Proficient in .NET Core with hands-on coding in .NET Core.
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, any cloud computing, etc.). Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Total 5-8 years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in Azure and Databricks or Cloudera Spark certified developers. Knowledge or experience of Snowflake will be an added advantage.
Posted 1 week ago
5.0 - 10.0 years
13 - 22 Lacs
Pune
Work from Office
SUMMARY Job Role: Node.js with Azure Developer Location: Pune Experience: 5+ years Must-Have: The ideal candidate should possess a minimum of 4 years of relevant experience in Node.js with Azure development. We are seeking a motivated and skilled Azure AAD Developer with expertise in crafting cloud-based solutions using Microsoft Azure and Node.js. This position is perfect for an individual who is enthusiastic about advancing in cloud-native development and contributing to the creation of scalable integration solutions. What You Will Do: Develop and manage integration workflows utilizing Azure Logic Apps and Azure Functions. Aid in the implementation of messaging solutions using Azure Service Bus, Event Grid, and Event Hub. Provide support for API development and management using Azure API Management. Work collaboratively with senior developers and architects to deliver scalable cloud solutions. Participate in code reviews, testing, and deployment processes. What You Will Need: Education & Experience: 4 to 6 years of experience in software development, with a minimum of 2+ years in Azure. BE/BTech degree in a technical field or equivalent combination of education and experience. Knowledge, Skills & Abilities: Proficiency in Node.js and JavaScript development. Experience in API and RESTful service development. Exposure to Azure integration tools and messaging services. Cloud development experience in Azure (App Services, API Management, Azure Functions, Azure Logic Apps). AZ-204 certification is a plus. Hands-on experience with Azure Logic Apps, Azure Functions, Azure messaging services, API Management, Azure Service Bus, Event Grid, and Event Hub. Strong problem-solving and communication skills. Reporting Relationships: Will report to a Manager, Product Delivery, and has no direct reports.
Working Conditions: The work environment will primarily be an air-conditioned office setting requiring the employee to sit for prolonged periods while concentrating on a computer screen. Requirements: 4-6 years of software development experience, with at least 2+ years in Azure. BE/BTech degree in a technical field or equivalent combination of education and experience. Proficiency in Node.js and JavaScript development. AZ-204 certification is a plus.
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, any cloud computing, etc.). Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure. Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka technologies. Preferred technical and professional experience: Certification in Azure and Databricks or Cloudera Spark certified developers.
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Pune, Hinjewadi
Work from Office
Job Summary Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organization's efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives. Software Requirements Required Skills: Proficiency in PySpark Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL) Experience with Linux/Unix operating systems Data processing tools like Apache Kafka or similar streaming platforms Preferred Skills: Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight) Knowledge of Python (beyond PySpark), Java or Scala relevant to big data applications Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi) Overall Responsibilities Design, develop, and optimize scalable data processing pipelines using PySpark. Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions. Implement data transformations, aggregations, and extraction processes to support analytics and reporting. Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance. Troubleshoot and resolve performance issues within big data workflows. Document data processes, architectures, and best practices to promote consistency and knowledge sharing. Support data migration and integration efforts across varied platforms. Strategic Objectives: Enable efficient and reliable data processing to meet organizational analytics and reporting needs. Maintain high standards of data security, compliance, and operational durability. Drive continuous improvement in data workflows and infrastructure. 
Performance Outcomes & Expectations:
- Efficient processing of large-scale data workloads with minimal downtime.
- Clear, maintainable, and well-documented code.
- Active participation in team reviews, knowledge transfer, and innovation initiatives.

Technical Skills (By Category)
Programming Languages:
- Required: PySpark (essential); Python (needed for scripting and automation)
- Preferred: Java, Scala

Databases/Data Management:
- Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake)
- Preferred: Experience with NoSQL databases (Cassandra, HBase)

Cloud Technologies:
- Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP
- Preferred: Cloud certifications

Frameworks and Libraries:
- Required: Spark SQL, Spark MLlib (basic familiarity)
- Preferred: Integration with streaming platforms (e.g., Kafka), data validation tools

Development Tools and Methodologies:
- Required: Version control systems (e.g., Git), Agile/Scrum methodologies
- Preferred: CI/CD pipelines, containerization (Docker, Kubernetes)

Security Protocols:
- Optional: Basic understanding of data security practices and compliance standards relevant to big data management

Experience Requirements:
- Minimum of 7+ years of experience in big data environments with hands-on PySpark development.
- Proven ability to design and implement large-scale data pipelines.
- Experience working with cloud and on-premises big data architectures.
- Preference for candidates with domain-specific experience in finance, banking, or related sectors.
- Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply.

Day-to-Day Activities:
- Develop, test, and deploy PySpark data processing jobs to meet project specifications.
- Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews.
- Optimize existing data pipelines for performance and scalability.
- Monitor data workflows, troubleshoot issues, and implement fixes.
- Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs.
- Contribute to documentation, standards, and best practices for data engineering processes.
- Support the onboarding of new data sources, including integration and validation.

Decision-Making Authority & Responsibilities:
- Identify performance bottlenecks and propose effective solutions.
- Decide on appropriate data processing approaches based on project requirements.
- Escalate issues that impact project timelines or data integrity.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; equivalent experience considered.
- Relevant certifications preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar.
- Commitment to ongoing professional development in data engineering and big data technologies.
- Demonstrated ability to adapt to evolving data tools and frameworks.

Professional Competencies:
- Strong analytical and problem-solving skills, with the ability to model complex data workflows.
- Excellent communication skills to articulate technical solutions to non-technical stakeholders.
- Effective teamwork and collaboration in a multidisciplinary environment.
- Adaptability to new technologies and emerging trends in big data.
- Ability to prioritize tasks effectively and manage time in fast-paced projects.
- An innovation mindset, actively seeking ways to improve data infrastructure and processes.
Posted 2 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
- Experienced in backend and integration development using the .NET framework, microservices, RESTful APIs, and SOAP services
- Experience using Azure Integration Services as well as core Azure components such as APIM, Azure Functions, Logic Apps, Service Bus, Azure AD, Azure Key Vault, etc.
- Experience with the OAuth authorisation framework and the OpenID Connect authentication protocol
- Experience in designing, coding, and testing solutions targeted for Azure
- Good understanding of synchronous and asynchronous communication patterns, such as request-reply, messaging, and event-driven architecture, and their implementation in an Azure environment
- Knowledge of network protocols such as SFTP, SCP, and FTP, and of SSH key-based authentication
- Experience using API tools such as Postman, SoapUI, etc.
- Knowledge of the contract-first API approach with Swagger 2 or OpenAPI 3
- Experience using Git for version control
- Knowledge of CI/CD automated builds, pipelines, and releases
- Knowledge of Agile and DevOps methods of delivering projects

Location: Bangalore, Mumbai, Pune, Chennai, Hyderabad, Kolkata
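The OAuth requirement above usually means the client-credentials grant against Azure AD (Entra ID). A minimal sketch of the token request that flow sends is below; the tenant, client ID, secret, and scope are placeholders, and a real integration would POST this body and cache the returned token.

```python
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret, scope):
    """Return the Azure AD token endpoint URL and the form-encoded body
    for the OAuth 2.0 client-credentials grant (RFC 6749, section 4.4)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return url, body

# Placeholder values for illustration only.
url, body = build_token_request(
    "contoso-tenant", "my-client-id", "my-secret", "api://my-api/.default",
)
print(url)
```

The returned bearer token would then be sent in the `Authorization` header of calls routed through APIM.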
Posted 2 weeks ago
6.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases using the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 6-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Azure cloud data platforms
- Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in Azure, plus Databricks- or Cloudera-certified Spark developers
Posted 2 weeks ago
5.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases using the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Azure cloud data platforms
- Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server
- Exposure to streaming solutions and message brokers such as Kafka
- Experience with Unix/Linux commands and basic work experience in shell scripting

Preferred technical and professional experience:
- Certification in Azure, plus Databricks- or Cloudera-certified Spark developers
Posted 2 weeks ago
4.0 - 8.0 years
10 - 15 Lacs
Gurugram
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Comprehensive feature development and issue resolution: work on end-to-end feature development and solve challenges faced during implementation.
- Stakeholder collaboration and issue resolution: collaborate with key internal and external stakeholders to understand problems and issues with the product and its features, and resolve them within the defined SLAs.
- Continuous learning and technology integration: eagerness to learn new technologies and apply them in feature development.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Relevant experience with APIM; Azure API Management experience
- Proficient in .NET
- Proficient with Azure platform development (Azure Functions, Azure services, etc.)
- Candidate should be from a .NET background
- Azure services such as Azure Functions, API integration, Logic Apps, APIM, Azure Storage (Blob, Table), Cosmos DB, etc.

Preferred technical and professional experience:
- .NET Azure full stack
- Proficient in .NET Core with hands-on coding in .NET Core
Posted 2 weeks ago
1.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
- Develop templates to automate infrastructure provisioning on Microsoft Azure using a scripting language of your choice.
- Lead troubleshooting investigations to bring quicker resolution to complex problems impacting our end users.
- Design and implement continuous improvements to monitoring mechanisms.
- Conduct application architecture reviews to recommend improvements for better reliability and application performance.
- Identify problems relating to mission-critical services and implement automation to prevent problem recurrence, with the goal of automating the response to all non-exceptional service conditions.
- Drive continuous improvement in the Azure platform, incorporating feedback.
- Enthusiastic, self-motivated, and a great teammate.
- Engage in application performance analysis, system tuning, and capacity planning.
- Provide Level 3 application support, troubleshooting application issues (all environments) reported by clients or product teams.
- Troubleshoot issues in production and other environments, applying debugging and problem-solving techniques (e.g., log analysis, non-invasive tests), working closely with Development and QA teams.

Qualifications:
- Strong experience with Azure technologies (WebApp, Azure API, Front Door, MI\FoG, compute, network, storage, and data in combination, or any three skills): IaaS and PaaS; VM creation; networks (VNet, Traffic Manager, Load Balancer, Application Gateway); App Services configuration; Function Apps, storage accounts, CDN, Logic Apps, Azure Key Vault, DNS
- Strong understanding of web technologies: IIS and how web applications work
- Strong experience with a scripting language (PowerShell/Python)
- Experience with Git or a similar development repository
- Understanding of database technologies (SQL/Cosmos/Mongo/PostgreSQL)
- Strong experience with instrumentation, monitoring, and alerting
- Capable of performing technical deep dives into application design and performing root cause analysis
- Experience in designing and configuring high availability for web applications
- Passionate about making things better and driving action with a sense of urgency
- Brings new thinking to challenge existing technology implementations and processes
- Excellent at building relationships across teams
- Firm sense of accountability and ownership
- Desire to understand our businesses and users
- Understanding of reliability and configuration management principles
- A strong understanding of Windows administration and of troubleshooting Windows OS and IIS errors
- Knowledge of networking, firewalls, load balancers, etc.

Education: Bachelor's Degree in a technical field
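The automation-to-prevent-recurrence goal above often starts with retrying transient failures before paging anyone. A minimal sketch of the exponential-backoff pattern such scripts use, with a hypothetical flaky probe standing in for an Azure service health check:

```python
def retry_with_backoff(probe, max_attempts=5, base_delay=1.0):
    """Call probe() until it succeeds, doubling the planned wait between
    attempts. Returns (result, delays_used); re-raises on final failure."""
    delays = []
    for attempt in range(max_attempts):
        try:
            return probe(), delays
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            delays.append(base_delay * (2 ** attempt))  # 1, 2, 4, 8, ...
            # time.sleep(delays[-1])  # a real script would wait here

# Hypothetical probe that fails twice, then recovers.
failures = {"left": 2}
def flaky_probe():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("transient failure")
    return "healthy"

result, delays = retry_with_backoff(flaky_probe)
print(result, delays)  # healthy [1.0, 2.0]
```

Production versions typically add jitter to the delays and emit the attempt count to the monitoring pipeline.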
Posted 2 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Information
- Job Opening ID: ZR_1585_JOB
- Date Opened: 26/11/2022
- Industry: Technology
- Work Experience: 6-10 years
- Job Title: Azure API Integration
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560002
- Number of Positions: 4
- Location: Bangalore, Mumbai, Pune, Chennai, Hyderabad, Kolkata

- Experienced in backend and integration development using the .NET framework, microservices, RESTful APIs, and SOAP services
- Experience using Azure Integration Services as well as core Azure components such as APIM, Azure Functions, Logic Apps, Service Bus, Azure AD, Azure Key Vault, etc.
- Experience with the OAuth authorisation framework and the OpenID Connect authentication protocol
- Experience in designing, coding, and testing solutions targeted for Azure
- Good understanding of synchronous and asynchronous communication patterns, such as request-reply, messaging, and event-driven architecture, and their implementation in an Azure environment
- Knowledge of network protocols such as SFTP, SCP, and FTP, and of SSH key-based authentication
- Experience using API tools such as Postman, SoapUI, etc.
- Knowledge of the contract-first API approach with Swagger 2 or OpenAPI 3
- Experience using Git for version control
- Knowledge of CI/CD automated builds, pipelines, and releases
- Knowledge of Agile and DevOps methods of delivering projects
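The contract-first approach named above means the OpenAPI document is written and agreed before any handler code exists. A minimal sketch of such a contract, expressed as a Python dict for illustration (the path, title, and schema are hypothetical; real contracts are usually authored in YAML and fed to codegen and validation tooling):

```python
import json

spec = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API (example)", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "parameters": [{
                    "name": "orderId", "in": "path", "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "The order"}},
            }
        }
    },
}

# Contract-first workflows validate the document before writing handlers.
assert spec["openapi"].startswith("3."), "contract must target OpenAPI 3"
assert "/orders/{orderId}" in spec["paths"]
print(json.dumps(spec["info"]))
```

Tools like Swagger Editor or APIM's import step would consume this document to generate stubs and enforce the contract at the gateway.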
Posted 3 weeks ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
- Develop, maintain, and support scalable and secure applications using .NET (Core/Framework) and Azure cloud services.
- Design and implement cloud-based solutions utilizing various Azure services, including Azure Functions, App Services, Azure Storage, Azure SQL Database, Azure DevOps, and Azure Kubernetes Service.
- Collaborate with cross-functional teams, including front-end developers, DevOps engineers, and product managers, to design and deliver end-to-end solutions.
- Implement RESTful APIs and integrate with third-party services using Azure API Management and other relevant technologies.
- Write efficient, reusable, and modular code while adhering to best practices, coding standards, and version control using Git.
- Ensure application performance, scalability, and security by leveraging Azure monitoring tools, logging frameworks, and cloud security best practices.
- Participate in code reviews, provide feedback, and maintain a culture of continuous improvement.
- Troubleshoot and resolve application issues, including performance bottlenecks and security vulnerabilities.
- Create and manage deployment pipelines using Azure DevOps or other CI/CD tools for automated testing and deployment.
- Work closely with the team to ensure the implementation of cloud solutions meets business requirements, performance expectations, and cost optimizations.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Their work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise in their software engineering discipline to reach the standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Grade Specific:
- Develop, maintain, and support scalable and secure applications using .NET (Core/Framework) and Azure cloud services.
- Design and implement cloud-based solutions utilizing various Azure services, including Azure Functions, App Services, Azure Storage, Azure SQL Database, Azure DevOps, and Azure Kubernetes Service.
- Collaborate with cross-functional teams, including front-end developers, DevOps engineers, and product managers, to design and deliver end-to-end solutions.
Posted 3 weeks ago
10.0 - 18.0 years
2 - 11 Lacs
Coimbatore, Tamil Nadu, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft BOT Framework
Good-to-have skills: No Technology Specialization
Minimum 10+ years of experience is required
Educational Qualification:
a) Must have: BE/BTech/MCA
b) Good to have: ME/MTech

Key Responsibilities:
a) Work closely with client teams to define and architect the solution for our clients, estimating the components required to provide a comprehensive AI solution that meets and exceeds the client's expectations and delivers tangible business value.
b) The candidate is expected to deliver cognitive applications that solve or augment business issues using leading AI technology frameworks.
c) Direct and influence client Bot/Virtual Agent architecture, solutioning, and development.

Technical Experience:
a) 7-8 years of experience and a thorough understanding of the Azure PaaS landscape
b) 2-3 years of experience with the Microsoft AI/Bot Framework
c) Good to have: knowledge of and experience with Azure Machine Learning, AI, and Azure HDInsight

Professional Attributes:
a) Strong problem-solving and analytical skills
b) Good communication skills
c) Ability to work independently with little supervision, or as part of a team
d) Able to work and deliver under tight timelines
Posted 4 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions with Azure Synapse or Azure SQL Data Warehouse; Spark on Azure is available in HDInsight and Databricks.
- Good customer communication.
- Good analytical skills.
Posted 1 month ago
1 - 3 years
4 - 8 Lacs
Pune
Work from Office
Required Experience: 5 - 7 Years
Skills: Node.js

Critical Skills to Possess:
- At least 5 years of experience as a MERN stack software developer
- Experience working in pods or in an Agile environment as a software developer, partnering successfully with the tech leader
- Proven experience in developing and implementing high-performing and scalable solutions
- Successful track record working as an individual contributor and in collaboration with other pod members
- Good knowledge of the software development lifecycle and its process tools
- Sound awareness of industry trends
- Problem-solver and innovative

Technology Stack:
- UI/UX: HTML5, CSS, React.js, Next.js, Microsoft .NET
- Microservices: Node.js, .NET
- Cloud Technologies: Microsoft Azure
- CI/CD Stack: Azure DevOps Pipelines, Jenkins, Git
- Databases: Cosmos DB, MySQL, MongoDB, PostgreSQL, Azure SQL Database
- Defining and implementing Open API specifications (Swagger)

Understanding of the below areas:
- API/Integration Tools: Dell Boomi, Broadcom/CA API Gateway, Azure API Gateway
- Identity Management Tools
- Information Security tools and technologies
- Containers: Docker, Kubernetes
- Container Management: Azure Kubernetes Services
- Content Management Tools (DX, WordPress)
- Content Delivery Network (CDN) Tools (Akamai)
- Testing/Performance Testing, Monitoring and Documentation Tools: Azure Monitor, AppDynamics, BlazeMeter, Selenium, JUnit, Mocha

Preferred Qualifications:
- BS degree in Computer Science or Engineering, or equivalent experience
Posted 1 month ago