0 years
0 Lacs
India
Remote
Data Scientist Associate Responsibilities As a selected intern, your daily tasks will include: Engaging in data science projects and analytics. Developing and implementing data models and AI/ML solutions, including deep learning, NLP, and GenAI (LangChain, LLMs, LLaMA, OpenAI, and GPT-based solutions). Managing data pipelines, ETL/ELT processes, and data warehousing. Utilizing Python and its libraries for advanced programming tasks. Handling data collection, management, cleaning, and transformation. Creating data visualizations using BI tools such as Power BI, Kibana, and Google Data Studio. Working with databases like MongoDB, Neo4j, Dgraph, and SQL. Leveraging cloud platforms, including GCP, AWS, Azure, Linode, and Heroku. Required Skills Python, Flask, Django, MongoDB, API Development, Elasticsearch, Machine Learning, Artificial Intelligence. Job Details Work Mode: Remote (Work From Home) Start Date: Immediate Duration: 6 months Stipend: ₹10,000 – ₹12,000 per month Industry: Information Technology & Services Employment Type: 6-month probation followed by a full-time position based on performance
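The "data collection, management, cleaning, and transformation" work this internship describes can be sketched in plain Python. A minimal, hedged example; the field names (`name`, `salary`) and cleaning rules are illustrative, not from the posting:

```python
import csv
import io

def clean_rows(raw_csv: str) -> list:
    """Parse raw CSV text, drop incomplete rows, and normalize fields."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        name = (row.get("name") or "").strip()
        salary = (row.get("salary") or "").replace(",", "").strip()
        if not name or not salary:
            continue  # drop rows missing required fields
        rows.append({"name": name.title(), "salary": int(salary)})
    return rows
```

In practice this kind of logic would usually live in a pandas pipeline, but the core decisions (what counts as missing, how values are normalized) look the same.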
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Location: Bangalore, Karnataka, India (EOIZ Industrial Area) Worker Type Reference: Regular - Permanent Pay Rate Type: Salary Career Level: T3(B) Job ID: R-45559-2025 Description & Requirements Introduction: A Career at HARMAN Digital Transformation Solutions (DTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions. We are looking for extensive experience in defining, developing, and implementing security software, ideally with a strong embedded firmware development background. About the Role This position offers the opportunity to work in a globally distributed team, with a unique opportunity for personal development in a multicultural environment and a challenging setting in which to build expertise in technologies valued by the industry. What You Need 4+ years of experience; based in Bangalore. Translate requirements of business functions into “graph thinking”. Build and maintain graphs and related applications from data and information, using the latest graph technologies to leverage high-value use cases. Support and manage graph databases. Integrate graph data from various sources, internal and external. Extract data from various sources, including databases, APIs, and flat files. Load data into target systems, such as data warehouses and data lakes. Develop code to move data (ETL) from the enterprise platform applications into the enterprise knowledge graphs. Optimize ETL processes for performance and scalability. Collaborate with data engineers, data scientists and other stakeholders to model the graph environment to best represent the data coming from multiple enterprise systems. Excellent problem-solving and analytical skills. A growth mindset with a curiosity to learn and improve. Team player with strong interpersonal, written, and verbal communication skills.
Business consulting and technical consulting skills. An entrepreneurial spirit and the ability to foster a positive and energized culture. Fluent communication skills in English (spoken and written). Experience working in Agile (Scrum knowledge appreciated) with a DevOps mindset. What Is Nice to Have Experience with graph databases such as Dydra, Amazon Neptune, Neo4j, or Oracle Spatial and Graph is a plus. A master’s or bachelor’s degree in computer science, mathematics, electronics engineering or a related discipline, with at least 10 years of experience in a similar role. Experience with other NoSQL databases, such as key-value databases and document-based databases (e.g. XML databases), is a plus. Experience with relational databases. Programming experience, preferably Java, JavaScript, Python, PL/SQL. Experience with web technologies: HTML, CSS, XML, XSLT, XPath. Experience with modelling languages such as UML. Understanding of CI/CD automation, version control, build automation, testing frameworks, static code analysis, IT service management, artifact management, container management, and experience with related tools and platforms. Familiarity with cloud computing concepts (e.g. AWS and Azure). What Makes You Eligible Any offer of employment is conditioned upon the successful completion of a background investigation and drug screen. Dedicated performer and team player with the ability to advocate appropriately for product quality.
Relentless learner with a dedication to learning new technologies and test methods. Self-driven and innovative, driving continuous improvements in the test process. Resourceful in triaging problems and coordinating with multiple teams for issue resolution. Strong written and verbal communication and interpersonal skills. What We Offer Flexible work environment, allowing full-time remote work globally for positions that can be performed outside a HARMAN or customer location. Access to employee discounts on world-class Harman and Samsung products (JBL, Harman Kardon, AKG, etc.). Extensive training opportunities through our own HARMAN University. Competitive wellness benefits. Tuition reimbursement. “Be Brilliant” employee recognition and rewards program. An inclusive and diverse work environment that fosters and encourages professional and personal development. You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you, all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences.
Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today! Important Notice: Recruitment Scams Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com. HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
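The graph ETL responsibility in this role ("develop code to move data from enterprise applications into knowledge graphs") typically reduces to generating idempotent Cypher MERGE statements from source records. A minimal sketch; the `Supplier` label and property names are hypothetical, and in production the statement would be executed through the official neo4j Python driver (`session.run(query, parameters)`):

```python
def merge_statement(label: str, key: str, record: dict):
    """Build a parameterized, idempotent Cypher MERGE for one node.

    MERGE matches on the key property only; remaining properties are
    applied with SET so re-running the load updates rather than duplicates.
    """
    if key not in record:
        raise ValueError("key property %r missing from record" % key)
    query = f"MERGE (n:{label} {{{key}: ${key}}}) SET n += $props"
    return query, {key: record[key], "props": record}
```

Keeping the statement parameterized (rather than interpolating values) is what makes the load safe against injection and lets Neo4j cache the query plan.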
Posted 2 weeks ago
2.0 - 3.0 years
4 - 5 Lacs
Bengaluru
Work from Office
We are looking for: Experience At least 2 to 3 years of experience in NodeJS, TypeScript, React is required. Proven experience in building, deploying, maintaining and scaling APIs and microservices. Job Responsibilities Solid experience in NodeJS, TypeScript, React, Neo4j and Firestore (GCP). In-depth knowledge of software design and development practices. Design and develop scalable systems using advanced concepts in NodeJS, TypeScript, JavaScript, and React. Should have a good understanding of deploying and working with GKE. Ability to design for scale and performance; peer code reviews. Architecture/platform development, API, data modelling at scale. Excellent working experience in Express, Knex, Serverless GC Functions. Solid experience in JavaScript frameworks (Angular / React.JS), Redux, JavaScript, jQuery, CSS, HTML5, ES5, ES6 & ES7, in-memory databases (Redis / Hazelcast), build tools (webpack). Good error and exception handling skills. Ability to work with Git repositories and remote code hosting services like GitHub and GitLab. Ability to deliver amazing results with minimal guidance and supervision. Passionate (especially about web development!), highly motivated, and fun to work with.
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
The Architect cum Senior Developer with Neo4j/graph database expertise in the telecom sector will be responsible for designing, developing, and implementing graph database solutions tailored to the operational and business requirements of telecom service providers. Design, develop, and maintain graph database models using Neo4j. Create and optimize Cypher queries for data retrieval and manipulation. Integrate Neo4j with applications using appropriate languages (Java, Python, JavaScript, etc.). Implement ETL pipelines for loading and transforming data into Neo4j. Ensure data integrity and performance of graph structures. Strong experience with Neo4j and the Cypher query language. Proficiency in programming languages like Java, Python, or JavaScript. Experience with data modeling and graph theory concepts. Familiarity with database security and performance tuning. Experience with Spring Data Neo4j or the Neo4j Bolt driver. Background in network analysis, recommendation systems, or knowledge graphs. Excellent communication skills for stakeholder engagement and technical documentation. Ability to mentor junior developers and guide them in best practices. Proactive approach to identifying and resolving data-related challenges in telecom network systems. 7-10 years of experience in Neo4j/graph database development, with a minimum of 5 years in the telecom sector. Telecom Domain, Python, Neo4j, Java, JavaScript
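One performance pattern behind the "implement ETL pipelines for loading data into Neo4j" and "optimize Cypher queries" duties: batch rows and send each batch through a single `UNWIND ... MERGE` statement instead of one round-trip per row. A hedged sketch; the `Device` label, properties, and batch size are illustrative only:

```python
# A single query consumes a whole batch via UNWIND; MERGE keeps reloads
# idempotent. The query text and node shape are invented for illustration.
BATCH_QUERY = (
    "UNWIND $rows AS row "
    "MERGE (d:Device {id: row.id}) "
    "SET d.site = row.site"
)

def batches(rows, size=1000):
    """Yield successive fixed-size chunks for UNWIND-based bulk loading."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]
```

Each chunk would be passed as the `$rows` parameter of one driver call, which amortizes transaction overhead across the whole batch.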
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Python, AWS, Terraform Required Technical Skill Set: Python Full Stack Developer Desired Experience Range: 6 to 8 years Notice Period: Immediate to 90 days only Location of Requirement: Hyderabad We are currently planning a walk-in interview on 14th June 2025 (Saturday). Date: 14th June 2025 (Saturday) Venue: Tata Consultancy Services Synergy Park (Non-SEZ) Campus, C9X3+FH8, TCS Synergy Park, Indian Immunologicals Colony, Gachibowli, Hyderabad, Telangana 500032 Job Description: Primary Skill Frontend: 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year), React hooks (1+ year). Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack. Experience with integrating with RESTful APIs or other web services. Backend: Expertise with Python (3+ years, preferably Python 3). Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI). Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow. Experience with developing microservices, RESTful APIs or other web services. Experience with database design and management, including NoSQL/RDBMS tradeoffs.
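The backend side of this role (Flask/FastAPI microservices exposing RESTful APIs) ultimately comes down to handlers like the one below, shown framework-free so the logic stands alone. The in-memory store and field names are invented for illustration; a real service would wrap this in a Flask route or FastAPI path operation:

```python
import json

# Hypothetical in-memory store standing in for a real database.
STORE = {1: {"id": 1, "name": "alpha"}}

def get_item(item_id: int):
    """Behave like a GET /items/<id> endpoint: return (status, JSON body)."""
    item = STORE.get(item_id)
    if item is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(item)
```

Separating the handler logic from the framework this way also makes it trivially unit-testable without spinning up a server.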
Posted 2 weeks ago
1.0 - 4.0 years
2 - 6 Lacs
Mumbai, Pune, Chennai
Work from Office
Graph Data Engineer required for a complex supply chain project. Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and the Cypher graph language; exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark and SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and Power BI. The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph database space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph languages; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, and graph data design. Relational database experience; hands-on SQL experience is required. Desirable (optional) skills: Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience with supply chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
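The LPG (labeled property graph) modelling skill this posting asks for can be sketched in a few lines: nodes carry labels and properties, relationships carry a type and direction. The supply-chain entities (`Supplier`, `Plant`, `SHIPS_TO`) below are hypothetical examples, not from the project:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str   # LPG node label, e.g. "Supplier"
    key: str     # business key stored as an id property

@dataclass(frozen=True)
class Rel:
    start: Node
    rel_type: str
    end: Node

    def to_cypher(self) -> str:
        """Render the relationship as a Cypher MATCH-able pattern."""
        return (f"(:{self.start.label} {{id: '{self.start.key}'}})"
                f"-[:{self.rel_type}]->"
                f"(:{self.end.label} {{id: '{self.end.key}'}})")
```

The direction of the relationship is part of the model: `SHIPS_TO` pointing from supplier to plant encodes a real-world asymmetry that RDF would instead express with a predicate.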
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview Angular with 5 to 7 years of experience; knowledge of Java would be great. Responsibilities Experience in design and development with UI technologies like Angular/React, HTML, and CSS, and the ability to lead from an engineering point of view. Understanding of and hands-on experience with Jasmine, Karma and Istanbul. Design and build next-gen UI visualizers in 2D and 3D, with backend integration experience. Build reusable code and libraries for future use. Understand various metrics for different types of test strategy for UI components. Analyze problems and propose high-quality solutions, with familiarity with browser testing and debugging. Work closely with functional designers, UX designers and other tech leads on the overall E2E app strategy. Good understanding of the telecom OSS domain. Experience working on graph databases [Neo4j/OrientDB]. Ability to deep-dive on technical areas and get the best outcome out of technically challenging situations. Good hands-on experience and the ability to mentor technical teams. Understand the needs of multiple projects and communicate them divisionally and/or cross-divisionally. Adopt new and emerging technologies to provide solutions that meet challenging needs. Ability to debug complex issues and provide the right solution. Ability to drive and validate technical and functional designs and lead them to implementation. May involve liaising with internal, external and third-party suppliers. Requirements Essentials Implemented different apps on various platforms like Angular, React, JS, HTML and CSS, with rapid prototyping. Proficient understanding of client-side scripting and JavaScript frameworks, including jQuery. Good understanding of asynchronous request handling, partial page updates, and AJAX. Hands-on coding ability and strong analytical skills to troubleshoot and provide technological solutions using UI design patterns, Oracle, PL/SQL, WebLogic and JavaScript.
In-depth understanding of the entire web development process (design, development and deployment). Experience working in an Agile/Scrum development process. Proficient understanding of code versioning tools such as Git and SVN, and experience building CI/CD pipelines for UI projects. Should possess good telecom OSS knowledge in areas like planning, inventory management, capacity management, orchestration and activation. Working knowledge of application performance tuning and continuous integration techniques. Effective verbal and written communication skills. Service design pattern implementation and knowledge. Desirables Working with large telco service providers is a plus. Telecom and industry certifications. Past experience working on a graph database [any flavor] and use cases related to traversal will be an added advantage. Experience in working with geographically dispersed teams.
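The "use cases related to traversal" mentioned above are, at their core, graph searches like the one below, shown over a plain adjacency map standing in for a graph database. The network element names are invented, in the spirit of the telecom OSS inventory domain the posting describes:

```python
from collections import deque

def reachable(graph: dict, start: str) -> set:
    """Breadth-first traversal: every node reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hypothetical access-network topology: an OLT feeding a splitter and ONTs.
topology = {"olt-1": ["splitter-1"], "splitter-1": ["ont-1", "ont-2"]}
```

In Neo4j the same question ("which elements are downstream of this one?") would be a variable-length Cypher pattern such as `MATCH (a {id: 'olt-1'})-[*]->(b)`, with the database doing the traversal.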
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Requirements Platform Experience Operating Systems: Linux (preferably RHEL 8.x) Containerization & Orchestration: Kubernetes, Docker, Containers CI/CD & Version Control: Jenkins, Git Configuration Management: Ansible Cloud Platforms: OpenShift Container Platform (OCP), AWS (including EKS) Virtualization & Cloud Infrastructure: OpenStack, VMware (Hypervisor, vSphere) Databases: Oracle, MariaDB, PostgreSQL, Neo4j Technical Skills & Responsibilities Proven experience working as a Software Implementation Specialist Strong experience in Linux-based product deployments in both bare-metal and cloud environments Deep expertise in cloud infrastructure including OpenStack, VMware, AWS EKS, Docker, and Kubernetes Installation, configuration, and operational support of Java-based software products Experience with bare-metal, VNF, and CNF software deployments Basic hands-on experience in networking (deep expertise not required): IP routing, gateway configuration, IP addressing Familiarity with database installation, configuration, query execution, and backups Experience in scripting (Shell, Python, or Perl) Additional Key Responsibilities Creation and maintenance of technical documentation, including installation guides, runbooks, deployment manuals, and architectural overviews Ability to work during maintenance windows (outside regular business hours) for planned deployments, upgrades, or troubleshooting activities Domain Knowledge Telecom domain exposure with understanding of network elements and 3G/4G/5G architectures Soft Skills Strong analytical, troubleshooting, and problem-solving skills Excellent verbal and written communication skills Ability to understand and interpret technical specifications and customer requirements Team leadership and people management experience Direct interaction with customers for technical discussions and implementation planning
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop a data model to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of Star and Snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
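The "ETL / data warehouse transformation processes" this role (and the identical postings below) describe often include dimension upserts. A Type 1 (overwrite-in-place) merge of a staging batch into a dimension table, sketched in plain Python; the `cust_id`/`city` columns are illustrative, and in Snowflake this would be a `MERGE INTO` statement:

```python
def upsert(dimension: dict, staging: list, key: str = "cust_id") -> dict:
    """Merge staging rows into a dimension keyed on a business key.

    Type 1 semantics: existing rows are overwritten, new keys inserted,
    and no history is kept. Returns a new dict; inputs are not mutated.
    """
    merged = dict(dimension)
    for row in staging:
        merged[row[key]] = row  # insert new keys, overwrite existing ones
    return merged
```

A Type 2 variant would instead close out the old row (effective-date columns) and insert a new version, which is why the choice of SCD type is a modelling decision, not an implementation detail.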
Posted 2 weeks ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop a data model to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of Star and Snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop a data model to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of Star and Snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop a data model to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of Star and Snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop a data model to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of Star and Snowflake dimensional modeling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
Posted 2 weeks ago
10.0 - 15.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Hiring for expert in Semantic Web (RDF, OWL, SPARQL), API-led architecture (REST/SOAP), Graph & NoSQL DBs, Java/Python, web tech (HTML, XML), UML, CI/CD, and cloud (AWS/Azure). Experience with Neo4j, Amazon Neptune, and microservices a plus.
Posted 2 weeks ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Location: Kolkata (Hybrid)
Experience Level: 5+ years
Mandatory Skills: Azure Databricks + SQL + PySpark
Primary Roles and Responsibilities
Developing Modern Data Warehouse solutions using Databricks and the Azure stack
Ability to provide forward-thinking solutions in the data engineering and analytics space
Collaborate with DW/BI leads to understand new ETL pipeline development requirements
Triage issues to find gaps in existing pipelines and fix them
Work with the business to understand reporting-layer needs and develop a data model to fulfill them
Help junior team members resolve issues and technical challenges
Drive technical discussions with the client architect and team members
Orchestrate the data pipelines in the scheduler via Airflow
Skills and Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience
Must have 5+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects
Deep understanding of star and snowflake dimensional modeling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Hands-on experience in SQL, Python and Spark (PySpark)
Must have experience with the Azure stack
Desirable: ETL with batch and streaming data (Kinesis)
Experience in building ETL / data warehouse transformation processes
Experience with Apache Kafka for streaming / event-based data
Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects
Should have experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail
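The streaming / event-based experience this posting asks for centers on incremental aggregation over a stream of events. A pure-Python toy sketch of that idea follows; the event fields and values are invented, and a real pipeline would consume from Kafka, typically via Spark Structured Streaming.

```python
from collections import defaultdict

# Toy incremental aggregation over an event stream -- the core
# pattern behind Kafka + streaming-ETL jobs. Event shapes and
# values below are invented for illustration.
running_totals = defaultdict(float)

def process_event(event):
    """Fold one event into the running per-sensor totals."""
    running_totals[event["sensor"]] += event["reading"]

stream = [
    {"sensor": "s1", "reading": 2.5},
    {"sensor": "s2", "reading": 1.0},
    {"sensor": "s1", "reading": 0.5},
]
for event in stream:
    process_event(event)
```

The same fold-per-event shape appears whether the state lives in memory (as here), in a Spark state store, or in a Delta table updated by micro-batches.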
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS Hiring: Walk-in Drive (face to face) on 14th June 2025, in person
Role: Python Full Stack Developer
Experience: 6 to 8 years
Locations: Hyderabad, Kolkata
Please read the job description before applying.
NOTE: If your skills/profile match and you are interested, please reply to this email with your latest updated CV and the details below:
Name:
Contact Number:
Email ID:
Highest Qualification: (e.g. B.Tech/B.E./M.Tech/MCA/M.Sc./MS/BCA/B.Sc./etc.)
Current Organization Name:
Total IT Experience:
Current CTC:
Expected CTC:
Notice Period:
Whether previously worked with TCS: Y/N
Frontend
6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year) and React hooks (1+ year)
Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
Experience integrating with RESTful APIs or other web services
Backend
Expertise with Python (3+ years, preferably Python 3)
Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
Experience developing microservices, RESTful APIs or other web services
Experience with database design and management, including NoSQL/RDBMS tradeoffs
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Python, AWS, Terraform
Required Technical Skill Set: Python Full Stack Developer
Desired Experience Range: 6 - 8 years
Notice Period: Immediate to 90 days only
Location of Requirement: Hyderabad
We are currently planning a walk-in interview on 14th June 2025 (Saturday).
Date: 14th June 2025 (Saturday)
Venue: Tata Consultancy Services Limited, Kohinoor Park, Plot No 1, Hitech City Road, Rd Number 1, HITEC City, Hyderabad, Telangana 500084
Job Description: Primary Skills
Frontend
6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year) and React hooks (1+ year)
Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
Experience integrating with RESTful APIs or other web services
Backend
Expertise with Python (3+ years, preferably Python 3)
Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
Experience developing microservices, RESTful APIs or other web services
Experience with database design and management, including NoSQL/RDBMS tradeoffs
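The backend work described above revolves around small JSON-over-HTTP services. A minimal standard-library sketch of such an endpoint follows; in this stack the service would actually be written with Flask or FastAPI, and the `/health` route and payload are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal JSON endpoint using only the standard library. A real
# service here would use Flask or FastAPI; /health is an invented
# example route.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the endpoint the way an API client would
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

The framework versions replace the handler class with a decorated route function, but the request/response contract is the same.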
Posted 2 weeks ago
0.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
IT Full-Time Job ID: DGC00669 Chennai, Tamil Nadu 3-9 Yrs ₹4 - ₹12 Yearly
Job Description
Position: Java Software Development Engineer
Location: Chennai, India
Duration: Full Time
Description: An exciting opportunity for a Java Software Development Engineer. This role is ideal for individuals looking to develop strategic software applications that support the corporate design community. The successful candidate will be responsible for designing, implementing, and maintaining application software solutions, with a focus on developing and supporting CAD Information Systems. This includes the design and development of Java Spring Boot microservices that support IP lifecycle management and SoC design flows, as well as creating APIs, web interfaces, and client automation scripts for end users. Successful candidates will be able to demonstrate strong attention to detail and an ability to multitask.
Responsibilities:
Participate in the development of Java Spring Boot microservices
Develop and deploy automation scripts using Python
Contribute to the implementation and improvement of design flows and methodologies
Investigate and utilize software tools/libraries to address various tasks
Test, debug, and maintain existing software tools and systems
Write and maintain technical documentation
Required Qualifications:
Bachelor's degree or higher in Computer Science or a related field
3 - 9 years of experience in Java development
Strong object-oriented programming skills
Proficiency in Java development and debugging
Proficiency in Spring Boot and microservice architecture
Experience with common GNU/Linux commands and general command-line usage
Preferred Qualifications:
Working knowledge of MongoDB and Neo4j
Familiarity with scripting languages such as Perl, Python, and shell scripting
Personal:
Excellent oral and written communication skills in English
Willingness to adopt new development tools to enhance teamwork and development flow
Strong attention to detail
Ability to multitask
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Company Description
Discvr.ai (www.discvr.ai) is India's first AI search engine, emerging from stealth and recently launching our public beta! We're tackling information overload by building an intelligent search and research platform designed to deliver deep, reliable insights faster than traditional methods. Our platform features efficient web search, specialized deep research (like US equity analysis), AI report generation on any topic, and unique interactive AI articles. We're a fast-paced, early-stage startup based in Gurgaon, building cutting-edge technology with a global vision.
Role Description
We are seeking an AI Full-Stack Engineer to join our core team. You will play a crucial role in designing, developing, and deploying key features across our entire platform, with a strong emphasis on integrating and leveraging AI capabilities. This is a unique opportunity to work at the intersection of AI, search technology, and user experience in a dynamic startup environment.
CTC: INR 6L per annum
Responsibilities
Design, build, and maintain responsive and performant user interfaces using React/Next.js and UI frameworks like Shadcn UI
Develop robust backend services and APIs using Python
Integrate and manage various data stores including MongoDB, Redis, Neo4j (graph DB), and Qdrant (vector DB)
Work closely with AI models: integrate third-party APIs, understand model usage patterns, and potentially assist in fine-tuning processes where required
Collaborate with the founder and potential future team members to define features and iterate quickly
Ensure the scalability, reliability, and security of the platform
Contribute to all phases of the software development lifecycle
Qualifications
Experience: 3-6 years of professional experience in a full-stack development role
Frontend: Strong proficiency in JavaScript/TypeScript, React, and Next.js; experience with state management and component libraries
Backend: Solid experience with Python and building RESTful APIs
Databases: Hands-on experience with NoSQL databases (like MongoDB) and caching systems (like Redis), and ideally exposure to vector databases (like Qdrant) and graph databases (like Neo4j)
AI/ML Interest: Demonstrable understanding of AI/ML concepts, experience integrating AI APIs (e.g., Bedrock, OpenAI, Hugging Face), and a strong desire to work deeply with AI technologies
Fundamentals: Strong grasp of computer science fundamentals (data structures, algorithms)
Problem Solver: Excellent analytical and problem-solving skills
Team Player: Good communication skills and ability to work effectively in a small team
Bonus Points (Preferred Qualifications):
Prior hands-on experience specifically developing AI-powered applications or features
Experience with model fine-tuning concepts or tools
Familiarity with cloud platforms (AWS, GCP, Azure)
Experience with UI libraries like Shadcn UI
Experience working in a fast-paced startup environment
Role Details & Culture:
This is a demanding but rewarding role in an early-stage startup. We currently operate on a 6-day work-week schedule, reflecting the fast pace required at this stage.
Opportunity to work directly on core product features and have a significant impact
Work with cutting-edge AI technology and a modern tech stack
Learn and grow rapidly in a dynamic environment
Location: Based out of Gurgaon, India. Hybrid work or fully remote arrangements are possible for the right candidate.
Equal Opportunity: Discvr.ai is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
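The vector-database exposure mentioned above (Qdrant) boils down to nearest-neighbour search over embedding vectors. A pure-Python sketch of that core idea follows; the document names and toy vectors are invented, and a real system would use the Qdrant client with learned embeddings.

```python
import math

# Toy nearest-neighbour search over embedding vectors -- the core
# operation a vector database like Qdrant performs at scale.
# Document names and vectors are invented toy data.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm

documents = {
    "intro_to_search": [0.9, 0.1, 0.0],
    "equity_analysis": [0.1, 0.9, 0.2],
    "ai_report_gen":   [0.2, 0.8, 0.3],
}

def top_match(query_vec):
    """Return the document whose embedding is closest to the query."""
    return max(documents,
               key=lambda name: cosine_similarity(query_vec, documents[name]))

best = top_match([0.0, 1.0, 0.25])
```

Production systems replace the linear scan with an approximate index (e.g. HNSW), but the similarity computation is the same.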
Posted 2 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Smart Working At Smart Working, we believe your job should not only look right on paper but also feel right every day. This isn’t just another remote opportunity - it’s about finding where you truly belong, no matter where you are. From day one, you’re welcomed into a genuine community that values your growth and well-being. Our mission is simple: to break down geographic barriers and connect skilled professionals with outstanding global teams and products for full-time, long-term roles. We help you discover meaningful work with teams that invest in your success, where you’re empowered to grow personally and professionally. Join one of the highest-rated workplaces on Glassdoor and experience what it means to thrive in a truly remote-first world. About the Role We’re hiring a Lead Frontend Engineer to architect and drive the implementation of a new frontend framework. This role is ideal for a hands-on technical leader who can collaborate cross-functionally, mentor others, and deliver high-quality, scalable UI solutions. You'll work closely with Product, UX, and Platform teams to align frontend execution with business and design goals. 
What You’ll Be Doing
Coordinate with Product Management on timelines and clear definitions of done
Deliver the UX team’s vision incrementally through iterative development
Maintain high code quality while minimizing implementation complexity
Architect and build a new frontend framework
Set up and maintain functional CI/CD pipelines
Mentor engineers and ensure the whole team can work effectively in the new framework
Define and communicate frontend requirements for backend/API development
Contribute to team culture by promoting collaboration, adaptability, and technical excellence
Must-Have Skills
8+ years with React and TypeScript
3+ years integrating with REST and/or GraphQL APIs
3+ years of experience with AWS
2+ years of experience leading engineering teams (technical leadership)
Proven ability to lead frontend projects from design through implementation
Strong communication and collaboration skills
Comfortable working in a fast-paced, iterative environment
Nice-to-Have Skills
Experience with Storybook and Miro
Familiarity with graph databases (e.g., Neo4j)
Experience with Terraform and GitHub Actions
Awareness of cybersecurity principles and secure coding practices
Why Smart Workers Love It Here
Fixed Shifts: 12:00 PM - 9:30 PM IST (summer) | 1:00 PM - 10:30 PM IST (winter)
No Weekend Work: Real work-life balance, not just words
Day 1 Benefits: Laptop and full medical insurance provided
Support That Matters: Mentorship, community, and forums where ideas are shared and exchanged
True Belonging: A long-term career where your contributions are valued
At Smart Working, you’ll never be just another remote hire. Be a Smart Worker - valued, empowered, and part of a culture that celebrates integrity, excellence, and ambition. If that sounds like your kind of place, we’d love to hear your story.
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Summary
The person in this role will be the technical team lead and the point of contact between the PM, Architect and People Leader. They will work closely with the Product Owner to break features down into detailed technical work chunks to be implemented by the team members, and will oversee the detailed technical designs of individual features. They will need to fully understand the Modeling ecosystem and where it fits in the GridOS context.
Job Description
Roles and Responsibilities
Serve as technical lead for the Modeling Development team: single point of contact on technical development aspects for the Architect, PO, Scrum Master and Team Manager; owns onboarding and ramp-up processes for team members; owns efficiency and quality of the development process
Responsible for the quality of development in terms of software performance, code quality, test automation, code coverage, CI/CD and documentation
Oversee the detailed technical designs of individual features
Provide high-level estimates for the different features of the products
Own technical deliverables during the entire lifecycle of the products
Keep product development on track in terms of budget, time and quality
Keep track of developments happening within the GridOS ecosystem and build bridges with other engineering and services teams
Interact with Services teams and partner integrator teams to provide processes that ensure best use of GridOS Modeling products and services
Effectively communicate both verbally and in writing with peers and team members as an inclusive team member
Serve as a technical leader and mentor on complex, integrated implementations within the GridOS Modeling product teams
Work in a self-directed fashion to proactively identify system problems, failures, and areas for improvement
Track issue resolution, document implemented solutions and create troubleshooting guides
Peer review of pull requests
Education Qualification
For roles outside the USA: Bachelor's degree in Computer Science or "STEM" majors (Science, Technology, Engineering and Math) with significant experience.
For roles in the USA: Bachelor's degree in Computer Science or "STEM" majors (Science, Technology, Engineering and Math).
Years of experience: 8+ years
Desired Characteristics
Technical Expertise
Strong understanding of OOP concepts
Strong experience with Kubernetes and microservices architectures; container technology
Strong expertise in Java and Python, Maven and the Spring Boot framework
REST API (OpenAPI) and event design
GraphQL schema and service design
Graph technologies and frameworks: Apache Jena / Neo4j / GraphDB
Experience with RDF and SPARQL
Unit and integration test design
CI/CD pipeline design
JSON & YAML schemas
Event-driven architecture
Data streaming technologies such as Apache Kafka
Microservice observability and metrics
Integration skills
Autonomous and able to work asynchronously (due to time zone differences)
Software and API documentation
Good to have: data engineering and data architecture expertise; Apache Camel and Apache Arrow; experience in grid or energy software business (AEMS / ADMS / Energy Markets / SCADA / GIS)
Business Acumen
Adept at navigating the organizational matrix; understands people's roles, can foresee obstacles, identify workarounds, leverage resources and rally teammates
Understands how internal and/or external business models work and facilitates active customer engagement
Able to articulate the value of what is most important to the business/customer to achieve outcomes
Able to produce functional-area information in sufficient detail for cross-functional teams to utilize, using presentation and storytelling concepts
Possesses extensive knowledge of the full solution catalog within a business unit and proficiency in discussing each area at an advanced level
Six Sigma Green Belt certification or equivalent quality certification
Leadership
Demonstrated working knowledge of the internal organization; foresees obstacles, identifies workarounds, leverages resources, rallies teammates
Demonstrated ability to work with and/or lead blended teams, including 3rd-party partners and customer personnel
Demonstrated change management/acceleration capabilities
Strong interpersonal skills, including creativity and curiosity, with the ability to effectively communicate and influence across all organizational levels
Proven analytical and problem-resolution skills
Ability to influence and build consensus with other Information Technology (IT) teams and leadership
Note
To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, JDs should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used.
This Job Description is intended to provide a high-level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.
Additional Information
Relocation Assistance Provided: No
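The RDF/SPARQL skills this role lists center on matching triple patterns against a graph. A pure-Python sketch of that core idea follows; the subjects, predicates and objects are invented grid-flavoured examples, and real work would use Apache Jena, Neo4j or GraphDB rather than an in-memory list.

```python
# Toy triple store with SPARQL-style pattern matching. All
# subjects, predicates and objects are invented example data.
triples = [
    ("feeder1", "connectsTo", "substationA"),
    ("feeder2", "connectsTo", "substationA"),
    ("feeder1", "hasStatus", "energized"),
    ("substationA", "locatedIn", "regionNorth"),
]

def match(pattern):
    """Match an (s, p, o) pattern; None acts like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?feeder WHERE { ?feeder :connectsTo :substationA }
feeders = [s for s, _, _ in match((None, "connectsTo", "substationA"))]
```

A SPARQL engine generalizes this to joins across multiple patterns plus filters, but each basic graph pattern resolves exactly like `match` above.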
Posted 2 weeks ago
0.0 - 2.0 years
4 - 6 Lacs
Chennai
On-site
The Engineering Analyst 2 is an intermediate-level position responsible for a variety of engineering activities including the design, acquisition and development of hardware, software and network infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are being met within existing and planned frameworks.
Responsibilities:
Perform system and application monitoring, capacity planning and systems tests to ensure products meet performance requirements
Evaluate technologies, develop prototypes, contribute to design issues, and implement solutions
Work with various internal and external teams to identify and resolve problems
Consult with end users and clients to identify and correct systems problems or propose solutions
Assist in the development of software and systems tools used by integration teams to create end-user packages
Provide support for operating systems and in-house applications, including third-party applications, as needed
Perform coding, analysis, testing or other appropriate functions to identify problems and propose solutions
Adhere to Citi technology standards, audit requirements and corporate compliance issues and requirements
Apply knowledge of engineering procedures and concepts, and basic knowledge of other technical areas, to day-to-day activities
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications:
0-2 years of relevant experience in an engineering role
Experience working in financial services or a large, complex and/or global environment
Project management experience
Consistently demonstrates clear and concise written and verbal communication
Comprehensive knowledge of design metrics, analytics tools, benchmarking activities and related reporting to identify best practices
Demonstrated analytic/diagnostic skills
Ability to work in a matrix environment and partner with virtual teams
Ability to work independently, multi-task, and take ownership of various parts of a project or initiative
Ability to work under pressure and manage tight deadlines or unexpected changes in expectations or requirements
Proven track record of operational process change and improvement
Education:
Bachelor's degree/University degree or equivalent experience
Roles & Responsibilities:
Knowledge of APIGEE implementation and support
Working experience of all CI/CD processes, including LSE and ECS
Hadoop cluster experience
Cloud computing knowledge on AWS
Must have SRE knowledge and self-healing implementation
Experience with automatic server patching and batch management
Working experience with DevOps tools and technologies
Skillset: Big Data, Hadoop cluster, Kafka, GemFire, Neo4j, TeamCity, uDeploy, Autosys, RHEL, Oracle
- Job Family Group: Technology
- Job Family: Systems & Engineering
- Time Type: Full time
- Most Relevant Skills: Please see the requirements listed above.
- Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
- Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
In This Role, Your Responsibilities Will Be:
CI/CD Pipeline Management: Design, develop and maintain CI/CD pipelines integrated with automated testing, version control and seamless deployments across multiple products. Ensure scalable, secure, and cost-effective infrastructure.
Cloud Infrastructure Management: Deploy and manage containerized microservices on Azure Kubernetes Service (AKS). Optimize container performance and security.
Platform Engineering: Design and implement self-service platforms that enable automated provisioning, testing, and deployment, reducing deployment time.
Monitoring & Observability: Implement observability tools such as Prometheus and Grafana, creating real-time dashboards and proactive alerting mechanisms to ensure high availability (99.9% uptime).
Security & Compliance: Integrate security best practices into CI/CD pipelines, including secret management, role-based access control (RBAC), and automated vulnerability scanning to ensure compliance with CIS standards.
Testing & Deployment Stability: Work with QA teams to integrate automated regression testing, deployments, and feature flagging, improving release stability and reducing deployment errors.
Kubernetes Administration: Perform Kubernetes version upgrades; implement SSL certificates, Helm charts, and custom resource definitions (CRDs); optimize application performance.
Certificate Management: Automate tracking and renewal of SSL certificates to minimize expiration risks and ensure uninterrupted operations.
Backup and Recovery: Implement backup and disaster recovery policies. Ensure high availability and failover mechanisms.
Team Collaboration & Mentorship: Lead and mentor a team of engineers, fostering a culture of engineering excellence and continuous learning.
Who You Are:
You are a quick learner, resourceful, and take a productive approach to continuous improvement.
You have strong analytical skills, manage ambiguity well, and mitigate risks identified while delivering solutions. You always keep the end in sight and put in extra effort to meet deadlines. You take a collaborative approach, are interpersonally savvy, and have excellent verbal and written communication skills.
For This Role, You Will Need:
Bachelor's degree in Engineering, Computer Science, or a related technical field
5 to 7 years of experience in a Cloud/DevOps role
Azure DevOps Expertise: Hands-on experience with Azure Pipelines, Application Insights, Docker, Azure Kubernetes Service (AKS), Helm, and Terraform/ARM
CI/CD Pipelines & Automation: Strong understanding of continuous integration and deployment processes; hands-on experience with Jenkins and GitHub Actions
Version Control & Agile Development: Proficiency in Git for version control, applying best practices for branching, merging and code reviews
Security & Compliance: Working experience in DevSecOps, security best practices, CIS compliance, and integrating security tools into DevOps workflows; experience integrating tools such as Aqua, SonarQube, Black Duck, Coverity and Bright
Monitoring & Logging: Proficiency in Prometheus and Grafana for system monitoring and performance analysis
Scripting & Automation: Expertise in scripting languages such as Python, PowerShell and Bash for automation and system administration
Team Leadership & Collaboration: Experience working in Agile/SAFe 6.0; collaborate effectively across development, QA, and security teams
Preferred Qualifications That Set You Apart:
Azure certifications (for example, AZ-400)
Certified Kubernetes Administrator (CKA)
Experience with RabbitMQ, PostgreSQL, Neo4j, etc.
Familiarity with process control and automation systems
Our Culture & Commitment to You
At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow.
We foster an environment that encourages innovation, collaboration, and diverse perspectives—because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave. About Us WHY EMERSON Our Commitment to Our People At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. 
Speed up to break through. Let’s go, together. Accessibility Assistance or Accommodation If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com. About Emerson Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go! No calls or agencies please.
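The certificate-management duty in the Emerson role above (automated tracking of SSL certificate expiry) reduces to comparing a certificate's notAfter timestamp with the current date. A minimal standard-library sketch follows; the timestamp format matches what Python's `ssl.getpeercert()` returns, and the sample value and 30-day threshold are invented for illustration.

```python
from datetime import datetime, timezone

# Compute days until a certificate expires, given a notAfter string
# in the format returned by ssl.getpeercert(), e.g.
# "Jun  1 12:00:00 2031 GMT". The sample value below is invented.
def days_until_expiry(not_after, now=None):
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Fixed reference date so the example is deterministic
reference = datetime(2031, 5, 2, 12, 0, 0, tzinfo=timezone.utc)
days_left = days_until_expiry("Jun  1 12:00:00 2031 GMT", now=reference)
renew_soon = days_left < 30  # 30-day alert threshold is an arbitrary example
```

An automated tracker would run this check per endpoint on a schedule and fire an alert (or trigger renewal) when `renew_soon` is true.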
Posted 3 weeks ago
175.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Purpose of Role and Accountabilities: Purpose of role The key purpose of this role is to take ownership and build solutions using opensource tools to provide OSS capabilties for Fixed Network platform. The role will work independently with some guidance when needed and can contribute to pairing with other team members. The role ensures enabling rapid business feature delivery through automation in design/dev/test and build processes. Accountabilities Supports the design and development of discreet solutions in line with cloud technologies and security standards, demonstrating basic understanding of how solution impacts the wider Technology roadmap for an application Identifies and solves routine problems with components designed, performs deep dive data analysis, potentially introducing new components or features to resolve issues. Conducts technical peer reviews within the organisation or with suppliers to ensure outputs align to quality standards and customer requirements. Designs and develops automation of services with team, ensuring and documenting the quality of delivery Contributes technical and analytical knowledge where needed to support managers, principals or technical experts to resolve issues during projects. What Do We Need For This Job Be agile: You use your knowledge of best practice to collaborate with peers across the business in different ways. Give specialised advice: You offer advice and recommendations based on your expertise both proactively and on request. Issue management: You research and recommend solutions to resolve routine problems. You escalate complex or unexpected issues as needed. Technology design: You analyse user requirements and help to design components of the architecture for software, systems and networks. Technology integration: You develop solutions, applying your knowledge of advanced technology applications, databases and platforms, containers, build tools and how various features interact. 
Improvement focus: you question and challenge the way things are done and work with others to make improvements. You build your understanding of the end-to-end processes to which your own role and the wider team contribute.
Software development: you write basic code. You develop and test complex or non-routine digital software and platforms to make sure they meet design requirements.

What I'll be doing – your accountabilities:
Design, implementation, development, and maintenance of cloud-native platforms and tools for software releases.
Deep knowledge of CI/CD and DevOps; ability to deliver in an Agile methodology.
Ability to dive deep into technical areas and get the best outcome from technically challenging situations.
Understanding of the telecom domain and OSS to help deliver customer solutions.
Collaborating with the rest of the engineering team to achieve the common goal of quality, reusable deliverables.
Driving strong engineering practices in the team.
Ability to debug complex issues and provide the right solution.
Managing concurrent application releases to deliver quality software on time.
Exploring open-source technologies to provide industry-standard deliverables.
Stakeholder engagement and deriving business cases.

Skills required for the job:
Hands-on experience in REST web services, Python, Groovy, Shell, Ansible, and Helm.
Hands-on experience with containerisation tools and on-prem Kubernetes.
Hands-on experience with CI/CD tools such as Jenkins, Spinnaker, and Sonar.
Hands-on experience with repository tools such as Git, Nexus, and Harbor.
Understanding of RDBMS and NoSQL databases (Neo4j).
Understanding of monitoring and observability tools such as Dynatrace.
Understanding of object storage, Kafka, Postgres, Vault, and image-scanning tools.
Expertise developing and working in a DevOps operating model.
Basic knowledge of cloud, OpenStack, SDN, etc. is good to have.
Designing scalable solutions for distributed service architectures.
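As a hedged illustration of the kind of Python automation the skills above point at, a small readiness summary over Kubernetes pod statuses is sketched below. The data shape and thresholds are hypothetical (this is not BT tooling); in practice such data would come from the Kubernetes API.

```python
# Minimal sketch: summarising pod readiness for an on-prem Kubernetes
# health check. The pod dict shape is illustrative, not a real client API.

def summarise_pods(pods):
    """Given a list of {'name': str, 'ready': bool, 'restarts': int} dicts,
    return (ready_count, total, suspects), where suspects lists pods that
    are not ready or have restarted more than twice."""
    ready = sum(1 for p in pods if p["ready"])
    suspects = [p["name"] for p in pods
                if not p["ready"] or p["restarts"] > 2]
    return ready, len(pods), suspects

pods = [
    {"name": "api-0", "ready": True, "restarts": 0},
    {"name": "api-1", "ready": False, "restarts": 5},
]
print(summarise_pods(pods))  # (1, 2, ['api-1'])
```

The restart threshold is an arbitrary illustrative choice; a real check would likely read it from configuration.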
Technical Expertise: Be a subject expert and drive shared learning and collaboration. Be a leader in the industry, recognised externally as a thought leader and for making outstanding contributions, perhaps via a professional body.
Strategic / Enterprise Perspective: Able to create and articulate a compelling, holistic strategic vision and roadmap to both colleagues and external contractors. This may require the ability to articulate complex information to colleagues working outside the technical space.
Change Management / Agile Delivery: Able to create and sustain successful large-scale and complex business change. An expert in change management.
Process Design & Documentation: The ability to describe, map, and document complete processes in a structured way, down to procedural detail, and to organise the process flow in synchronisation with its exchanges with other processes.

Connected Leaders behaviours
The three Connected Leaders behaviours most suitable to the job:
Solution-focused Achiever: You deliver and coordinate activities and respond positively to challenges, clearly outlining expectations for yourself and others.
Collaborative Partner: You develop great working relationships with stakeholders and colleagues, sharing knowledge and collaborating.
Team Coach: You contribute to a high-performing culture, empowering others through coaching and development.

With over 175 years of heritage, BT is now the flagship business brand of BT Group. We've brought together our best people and capabilities into a B2B powerhouse serving 1.2 million business customers internationally. We're a global leader in secure connectivity and collaboration platforms for businesses of all shapes and sizes, from big household names and government departments right through to sole traders and new start-ups.
But it's not just the technology that matters; it's what it can do to help them build stronger, smarter, more secure businesses. We value diversity and inclusion and believe in making a positive impact. We connect for good by championing digital inclusion and equipping people, businesses, and communities with the digital skills to thrive. As a member of our team, you will be part of an organisation that celebrates difference, fosters innovation, and provides you with opportunities to be your best. With millions of businesses relying on us daily, joining BT means being part of a diverse and multi-skilled team that makes a significant impact on society.

A FEW POINTS TO NOTE:
Although these roles are listed as full-time, if you're a job-share partnership, work reduced hours, or work flexibly in any other way, please still get in touch. We will also offer reasonable adjustments for the selection process if required, so please do not hesitate to inform us.

DON'T MEET EVERY SINGLE REQUIREMENT?
Studies have shown that women and people who are disabled, LGBTQ+, neurodiverse, or from ethnic minority backgrounds are less likely to apply for jobs unless they meet every single qualification and criterion. We're committed to building a diverse, inclusive, and authentic workplace where everyone can be their best, so if you're excited about this role but your past experience doesn't align perfectly with every requirement in the job description, please apply anyway; you may just be the right candidate for this or other roles in our wider team.
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: In-Person (sftwtrs.ai Lab)
Experience Level: Early Career / 1–3 years

About sftwtrs.ai
sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains.

Role Overview
As a Research Engineer I, you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. The role bridges research and software development: you will both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills.

Key Responsibilities

Research & Prototyping
Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts).
Rapidly prototype novel model architectures, training schemes, and evaluation pipelines.
Design experiments, run benchmarks, and analyze results to validate research hypotheses.

Software Development & Integration
Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes).
Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.).
Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving.

Collaboration & Documentation
Work closely with the Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements.
Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications.
Participate in regular code reviews, design discussions, and sprint planning sessions.

Model Deployment & Monitoring
Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or a custom MLOps stack).
Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics.
Troubleshoot deployment issues and optimize inference pipelines for latency and throughput.

Continuous Learning & Contribution
Stay current with AI/ML trends; present findings to the team and propose new research directions.
Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit).
Mentor interns or junior engineers on machine learning best practices and coding standards.

Qualifications

Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a closely related field.

Research Experience:
1–3 years of hands-on experience in AI/ML research or equivalent internships.
Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training).
Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus.

Development Skills:
Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch).
Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant).
Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.

Software Engineering Practices:
Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker).
Comfortable writing unit tests (pytest or unittest) and conducting code reviews.
Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.

Analytical & Collaborative Skills:
Strong problem-solving mindset, attention to detail, and the ability to work under tight deadlines.
Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences.
Demonstrated ability to collaborate effectively in a small, agile team.

Preferred Skills (Not Mandatory)
Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended).
Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings).
Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices.
Exposure to Rust or Go for high-performance inference code.
Contributions to open-source AI or security automation projects.

Why Join Us?
Cutting-Edge Research & Production Impact: work on adversarial ML and security-automation projects that go from concept to real-world deployment.
Hands-On Mentorship: collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering.
Innovative Environment: join a lean, highly specialized team where your contributions are immediately visible and valued.
Professional Growth: access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development.
Competitive Compensation & Benefits: attractive salary, health insurance, and opportunities for performance-based bonuses.

How to Apply
Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line "RE: Research Engineer I Application."

sftwtrs.ai is an equal-opportunity employer.
We celebrate diversity and are committed to creating an inclusive environment for all employees.
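The monitoring responsibility described in the posting, tracking performance drift between a baseline and live data, is often approximated with a population stability index (PSI). The sketch below is a hedged, dependency-free illustration under assumed conventions (ten equal-width bins, the common ~0.2 drift threshold); it is not the lab's actual stack.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.
    Values above ~0.2 are commonly treated as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth empty buckets so the log term stays defined.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5]
live     = [0.1, 0.2, 0.3, 0.4, 0.5]
print(round(psi(baseline, live), 6))  # → 0.0 for identical samples
```

In a production monitoring script this check would typically run on a schedule against model scores or key input features, alerting when the index crosses the chosen threshold.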
Posted 3 weeks ago