Pinned · Bhavin Jawade in Towards Data Science
Tuning-Free Longer Context Lengths For LLMs — A Review of Self-Extend (LLM Maybe LongLM)
A simple strategy to enable LLMs to consume longer context-length inputs during inference without the need for finetuning.
Jan 4
Pinned · Bhavin Jawade in Towards Data Science
Demystifying GQA — Grouped Query Attention for Efficient LLM Pre-training
The variant of multi-head attention powering LLMs like LLaMA-2, Mistral 7B, etc.
Dec 27, 2023
Pinned · Bhavin Jawade in Towards Data Science
Understanding LoRA — Low Rank Adaptation For Finetuning Large Models
Fine-tuning large pre-trained models is computationally challenging, often involving adjustment of millions of parameters. This…
Dec 22, 2023
Pinned · Bhavin Jawade in Towards Data Science
Quantum Computing ?/!
Zeros and ones. This is how we have imagined computing until now. This is what classical computing is. But a whole new concept is now changing…
Jul 8, 2018
Bhavin Jawade
Deep Learning! (draft article)
As 2020 comes to an end, deep learning turns out to be the most coveted jargon of this decade. Though the core foundations of deep…
Jan 4, 2021
Bhavin Jawade
Training a Haar Cascade Model on Microsoft Azure
In this hands-on tutorial, we will learn how to train a Haar cascade model on Microsoft Azure. To understand Haar cascade, I…
Dec 24, 2018
Bhavin Jawade
Blockchain ?/! The Start of a New Revolution
Today, every big company you can think of is investing in blockchain. From tech giants like Microsoft and IBM to financial giants like…
Oct 18, 2017