Curriculum Vitae

General Information

Full Name Parth Padalkar
Date of Birth 28th February 1998
Languages English, Hindi, Marathi

Education

  • 2021-present
    PhD in Computer Science
    University of Texas at Dallas, Texas, USA
    • Developing neurosymbolic systems for enhancing interpretability of deep learning models.
    • Combining LLMs and symbolic reasoning via logic programming for reliable natural language generation.
  • 2019-2020
    MS in Computer Science
    University of Texas at Dallas, Texas, USA
  • 2015-2019
    B.Tech. in Instrumentation and Control Engineering
    National Institute of Technology Jalandhar, Punjab, India

Experience

  • May '20 - Aug '20
    Computer Vision Intern
    Tech For Good Inc., Boston, MA, USA
    • Coordinated a team to annotate a 5,000-image dataset of firearms in active-shooter scenarios; achieved 90% detection accuracy after experimenting with object detection models such as YOLO, Fast R-CNN, and Faster R-CNN.
  • Sept '19 - May '20
    Research Analyst
    Schizophrenia and Social Cognition lab, The University of Texas at Dallas, TX, USA
    • Analyzed data from patients with schizophrenia and developed an ML model that predicts the occurrence of the disease in subjects with 89% accuracy.
  • May '17 - July '17
    Research Intern
    IIM Amritsar, India
    • Developed software integrating the DEMATEL, MMDE, and ISM decision-making techniques to quantify the impact of enablers of, and barriers to, sustainable manufacturing.

Publications

  • 2024
    NeSyFOLD: A Framework for Interpretable Image Classification
    Parth Padalkar, Huaduo Wang, Gopal Gupta, @AAAI 2024, oral presentation (<4% selection rate)
    • Introduced NeSyFOLD, a neurosymbolic framework for making interpretable predictions in image classification tasks using Convolutional Neural Networks (CNNs).
    • A rule-set generated from the CNN, together with the CNN itself, serves as the interpretable model for making predictions. Showed an average increase of 8% in accuracy and an 83% reduction in rule-set size compared to the previous SOTA.
  • 2024
    Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
    Parth Padalkar, Huaduo Wang, Gopal Gupta, @Practical Aspects of Declarative Languages (PADL) 2024
    • Improved on NeSyFOLD by developing a novel algorithm that groups the outputs of similar kernels in the CNN.
    • Showed that using kernel groups to generate the rule-set yields comparable performance and, on average, a 14% reduction in rule-set size compared to using individual kernels.
  • 2024
    Automated interactive domain-specific conversational agents that understand human dialogs
    Yankai Zeng, Abhiramon Rajasekharan, Parth Padalkar, et al., @Practical Aspects of Declarative Languages (PADL) 2024
    • Developed a chatbot using LLMs and logic programming that is more reliable than a purely LLM-based chatbot.
    • Demonstrated its application as a hotel concierge that recommends restaurants more reliably than Bing AI.
  • 2023
    Reliable Natural Language Understanding with Large Language Models and Answer Set Programming
    Abhiramon Rajasekharan, Yankai Zeng, Parth Padalkar, Gopal Gupta, @International Conference on Logic Programming (ICLP) 2023
    • Proposed STAR, a framework that combines LLMs with Answer Set Programming (ASP) to improve reasoning in natural language understanding tasks.
    • Applied the STAR framework to tasks involving qualitative reasoning, mathematical reasoning, and goal-directed conversations and demonstrated its superior performance to vanilla LLMs.