Portfolio item number 1
Published:
Short description of portfolio item number 1
Published:
Short description of portfolio item number 2
Published in SMM4H, NAACL 2021, 2021
Sidharth Ramesh, Abhiraj Tiwari, Parthivi Choubey, Saisha Kashyap, Sahil Khose, Kumud Lakara, Nishesh Singh, Ujjwal Verma
This paper describes our submission for the Social Media Mining for Health (SMM4H) 2021 shared tasks. We participated in 2 tasks: (1) Classification, extraction and normalization of adverse drug effect (ADE) mentions in English tweets (Task-1) and (2) Classification of COVID-19 tweets containing symptoms (Task-6). We placed first in Task 1(a) and second in Tasks 1(b) and 6.
Download here
Published in Tackling Climate Change with ML, NeurIPS 2021, 2021
Sahil Khose, Abhiraj Tiwari, Ankita Ghosh
This paper proposes a semi-supervised solution to the classification and segmentation tasks on FloodNet, a high-resolution aerial imagery dataset.
Download here
Published in 1. ML for Creativity and Design, 2. Deep Generative Models and Downstream Applications, 3. CtrlGen: Controllable Generative Modeling in Language and Vision, and 4. New in ML workshop, NeurIPS 2021, 2021
Harsh Rathod, Manisimha Varma, Parna Chowdhury, Sameer Saxena, V Manushree, Ankita Ghosh, Sahil Khose
Sketches are a medium to convey a visual scene from an individual’s creative perspective. The addition of color substantially enhances the overall expressivity of a sketch. This paper proposes two methods to mimic human-drawn colored sketches by utilizing the Contour Drawing Dataset. Our first approach renders colored outline sketches by applying image processing techniques aided by k-means color clustering. The second method uses a generative adversarial network to develop a model that can generate colored sketches from previously unobserved images. We assess the results obtained through quantitative and qualitative evaluations.
Download here
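The first approach's color-quantization step can be illustrated with a minimal k-means sketch (plain NumPy; the function names, parameter values, and the toy image below are illustrative assumptions, not the paper's code):

```python
import numpy as np

def kmeans_colors(pixels, k=4, iters=10, seed=0):
    """Cluster RGB pixels into k dominant colors with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# Quantize a toy 32x32 RGB image to its 4 dominant colors.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3))
pixels = img.reshape(-1, 3)
centers, labels = kmeans_colors(pixels, k=4)
quantized = centers[labels].reshape(img.shape)
print(quantized.shape)  # (32, 32, 3)
```

In the paper's pipeline, the cluster centers would stand in for the dominant colors used to fill the rendered outline sketch.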
Published in ICBINB, NeurIPS 2021, 2021
Sahil Khose, Shruti Jain, V Manushree
This paper is an ablation study of distillation in a semi-supervised setting. Distillation not only reduces the number of parameters of the model, but does so while improving performance over the baseline supervised model and generalizing better. We find that the fewer the labels, the more this approach benefits from a smaller student network. This highlights the potential of distillation as an effective way to improve performance on semi-supervised computer vision tasks while keeping models deployable.
Download here
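At its core, the distillation objective in such setups is the standard soft-target loss. A minimal sketch (plain NumPy; the temperature value and function names are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])
loss_same = distillation_loss(teacher, teacher)           # identical logits -> 0
loss_diff = distillation_loss(np.zeros((1, 3)), teacher)  # mismatch -> positive
print(loss_same, loss_diff)
```

The student minimizes this loss (typically mixed with a hard-label term on the labeled subset), which is what lets a smaller network absorb the teacher's behavior.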
Published in WASSA, ACL 2022, 2022
Aditya Kane, Shantanu Patankar, Sahil Khose, Neeraja Kirtane
Detecting emotions in language is important for complete interaction between humans and machines. This paper describes our contribution to the WASSA 2022 shared task, which handles this crucial task of emotion detection: given an essay text, identify one of the following emotions: sadness, surprise, neutral, anger, fear, disgust, or joy. We use an ensemble of ELECTRA and BERT models to tackle this problem, achieving an F1 score of 62.76%. Our codebase (this https URL) and our WandB project (this https URL) are publicly available.
Download here
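Probability-level ensembling of two classifiers can be sketched as follows (NumPy; the stand-in logits and helper names are illustrative, not actual ELECTRA/BERT outputs):

```python
import numpy as np

EMOTIONS = ["sadness", "surprise", "neutral", "anger", "fear", "disgust", "joy"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_a, logits_b):
    """Average the two models' class probabilities, then take the argmax."""
    probs = (softmax(logits_a) + softmax(logits_b)) / 2.0
    return probs.argmax(axis=-1)

# Stand-in logits for one essay from each of the two models.
electra_logits = np.array([[0.2, 0.1, 2.5, 0.0, 0.3, 0.1, 0.4]])
bert_logits    = np.array([[0.1, 0.0, 1.9, 0.2, 0.2, 0.0, 0.6]])
pred = ensemble_predict(electra_logits, bert_logits)
print(EMOTIONS[pred[0]])  # neutral
```

Averaging probabilities rather than hard votes lets a confident model outweigh an uncertain one, which is a common reason such ensembles beat either model alone.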
Published:
Zero-shot learning (ZSL) has attracted significant attention due to its ability to classify images from unseen classes. In this paper, we propose to address a new and challenging task, namely explainable zero-shot learning (XZSL), which aims to generate visual and textual explanations to support the classification decision. Link for the video
Published:
Paper presentation and discussion of "Semi-Supervised Classification and Segmentation on High Resolution Aerial Images" by the authors Sahil Khose, Abhiraj Tiwari, and Ankita Ghosh. Link for the video
Published:
After dominating Natural Language Processing, Transformers have taken over Computer Vision recently with the advent of Vision Transformers. However, the attention mechanism’s quadratic complexity in the number of tokens means that Transformers do not scale well to high-resolution images. XCiT is a new Transformer architecture, containing XCA, a transposed version of attention, reducing the complexity from quadratic to linear, and at least on image data, it appears to perform on par with other models. What does this mean for the field? Is this even a transformer? What really matters in deep learning? Link for the video
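The key idea, attending over the d feature channels instead of the n tokens, can be sketched in a few lines (NumPy; a simplified single-head version with illustrative names, not the XCiT implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def l2norm(x):
    # Normalize each length-n column so the channel covariances are bounded.
    return x / (np.linalg.norm(x, axis=0, keepdims=True) + 1e-6)

def xca(q, k, v, tau=1.0):
    """Cross-covariance attention: the attention map is d x d over feature
    channels, so the cost grows linearly (not quadratically) in n tokens."""
    a = softmax((l2norm(q).T @ l2norm(k)) / tau)  # (d, d) channel attention
    return v @ a.T                                # (n, d) output

n, d = 196, 64  # e.g. 196 patch tokens with 64-dim features
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = xca(q, k, v)
print(out.shape)  # (196, 64)
```

Because the d x d map is independent of n, doubling the image resolution (quadrupling the token count) only doubles this step's cost twice over in n, versus the sixteen-fold blowup of standard token-to-token attention.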
Published:
DeepMind's breakthrough paper: 'Contrastive Predictive Coding 2.0' (CPC 2). With just 2% of the ImageNet data, CPC 2 crushes AlexNet's 59.3% Top-1 and 81.8% Top-5 accuracies, reaching 60.4% and 83.9%; given just 1% of the ImageNet data, it achieves 78.3% Top-5 accuracy, outperforming a supervised classifier trained on 5x more data! Trained on all the available images (100%), it not only outperforms fully supervised systems by 3.2% Top-1 accuracy, but it still manages to outperform those supervised models with just 50% of the ImageNet data! Link for the video
Published:
This paper introduces SimCLRv2 and shows that semi-supervised learning benefits strongly from self-supervised pre-training. Stunningly, it shows that bigger models yield larger gains when fine-tuning with fewer labelled examples. Link for the video
Published:
This paper shows how tiny episodic memories help in continual learning: repetitive training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it! Link for the video
Published:
This paper generates high-resolution bird images from text descriptions for the first time using a multi-staged GAN approach, beating SOTA results on 3 datasets. Link for the video
Published:
This paper is the first to combine convolutional neural networks and self-attention for the question-answering task, breaking the human benchmark on one of the most complex tasks in NLP. Link for the video
Published:
Instead of transferring knowledge from a teacher model to a different student model, self-distillation does it in the same model! This approach leads to faster inference and smaller models! Link for the video
Published:
This paper introduces the instruction tuning method to train their model FLAN (Finetuned LAnguage Net) in a zero-shot fashion, which is able to beat GPT-3’s zero-shot benchmarks on 19 of 25 tasks and even outperforms few-shot GPT-3 benchmarks on 10 of their tasks! Link for the video
Published:
This paper addresses one of the major challenges of Zero-Shot Object Detection - reducing ambiguity between the background class and unseen classes. With their method, they are able to get SOTA results for ZSD! Link for the video
Published:
The paper is about continual object detection: identifying unknown objects in the environment and then learning the newly labeled classes in an online manner, without re-training on the entire dataset. Link for the video