Pinned · Published in Towards Data Science
Tuning-Free Longer Context Lengths For LLMs — A Review of Self-Extend (LLM Maybe LongLM)
A simple strategy to enable LLMs to consume longer context-length inputs during inference without the need for fine-tuning.
Jan 4
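The core trick reviewed in this post is that Self-Extend remaps out-of-window relative positions back into the range the model saw during pre-training, using plain floor division, so no fine-tuning is needed. A minimal sketch of that remapping arithmetic as I understand the paper; the function name and the window/group defaults are mine, not from the paper's code:

```python
def self_extend_rel_pos(rel_pos: int, group_size: int = 4, window: int = 512) -> int:
    """Sketch of Self-Extend's position remapping for one query-key pair."""
    if rel_pos < window:
        # neighbor attention: exact relative positions inside the local window
        return rel_pos
    # grouped attention: floor-divide distant positions so every relative
    # position stays within the pre-training range, then shift so grouped
    # and neighbor positions meet exactly at the window boundary
    return rel_pos // group_size + (window - window // group_size)
```

At rel_pos = window the two branches agree (both give window), so the remapped positions are continuous across the boundary.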
Pinned · Published in Towards Data Science
Demystifying GQA — Grouped Query Attention for Efficient LLM Pre-training
The variant of multi-head attention powering LLMs like LLaMA-2, Mistral 7B, etc.
Dec 27, 2023
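For readers skimming, the gist of GQA: several query heads share a single key/value head, shrinking the K/V projections and cache relative to full multi-head attention. A minimal PyTorch sketch under sizes I chose for illustration (no causal mask or positional encoding, and the module name is mine):

```python
import torch
import torch.nn as nn

class GQA(nn.Module):
    """Toy grouped-query attention: 8 query heads share 2 KV heads."""
    def __init__(self, d_model=512, n_heads=8, n_kv_heads=2):
        super().__init__()
        self.h, self.kv, self.hd = n_heads, n_kv_heads, d_model // n_heads
        self.q = nn.Linear(d_model, n_heads * self.hd)
        self.k = nn.Linear(d_model, n_kv_heads * self.hd)
        self.v = nn.Linear(d_model, n_kv_heads * self.hd)
        self.o = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q(x).view(b, t, self.h, self.hd).transpose(1, 2)
        k = self.k(x).view(b, t, self.kv, self.hd).transpose(1, 2)
        v = self.v(x).view(b, t, self.kv, self.hd).transpose(1, 2)
        # each group of query heads attends to one shared K/V head
        k = k.repeat_interleave(self.h // self.kv, dim=1)
        v = v.repeat_interleave(self.h // self.kv, dim=1)
        att = (q @ k.transpose(-2, -1)) / self.hd ** 0.5
        out = att.softmax(-1) @ v
        return self.o(out.transpose(1, 2).reshape(b, t, -1))
```

With the 8-to-2 head ratio here, the KV cache is a quarter of its multi-head size; the ratio itself is my choice for the example.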
Pinned · Published in Towards Data Science
Understanding LoRA — Low Rank Adaptation For Finetuning Large Models
Fine-tuning large pre-trained models is computationally challenging, often involving adjustment of millions of parameters. This…
Dec 22, 2023
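The idea previewed here: freeze the pre-trained weight W and learn only a low-rank update BA, so a layer's trainable parameters drop from d_out·d_in to r·(d_in + d_out). A minimal sketch, with rank and scaling defaults I picked for illustration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank delta."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha/r) * B A x, i.e. base output plus the low-rank delta
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

For a 512×512 layer with r = 8, that is 8 × (512 + 512) = 8,192 trainable parameters instead of 262,144.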
Pinned · Published in Towards Data Science
Quantum Computing ?/!
Zeros and ones. This is how we imagined computing till now. This is what classical computing is. But a whole new concept is now changing…
Jul 8, 2018
ORPO — Preference Optimization without Reference Model
Combining Instruction Fine-tuning and Preference Alignment in a single stage.
Aug 29
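In brief, ORPO adds an odds-ratio preference term to the ordinary supervised fine-tuning loss, so alignment happens in the same stage and no frozen reference model is needed. A hedged sketch of that objective as I understand the paper; the function name, the λ default, and the use of mean per-token log-probabilities are my assumptions:

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Sketch of the ORPO objective.

    logp_chosen / logp_rejected: mean per-token log-probabilities of the
    chosen and rejected responses under the policy being trained.
    """
    # odds(p) = p / (1 - p); compute log-odds stably via log1p(-exp(logp))
    log_odds = (logp_chosen - torch.log1p(-torch.exp(logp_chosen))) - \
               (logp_rejected - torch.log1p(-torch.exp(logp_rejected)))
    ratio_loss = -F.logsigmoid(log_odds)   # preference-alignment term
    sft_loss = -logp_chosen                # instruction fine-tuning term
    return (sft_loss + lam * ratio_loss).mean()
```

The single combined loss is what lets one training stage replace the usual SFT-then-alignment pipeline.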
Training HaarCascade Model on Microsoft Azure
In this hands-on tutorial, you will learn how to train your own Haar cascade model on Microsoft Azure. To understand Haarcascade I…
Dec 24, 2018
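Once a cascade is trained as in the tutorial, using it with OpenCV's Python bindings takes only a few lines. A usage sketch; the XML and image paths are placeholders, not files from the tutorial:

```python
import cv2

# load a trained cascade and run it on an image (placeholder paths)
cascade = cv2.CascadeClassifier("my_cascade.xml")
img = cv2.imread("test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides the cascade over an image pyramid;
# minNeighbors filters out weakly supported detections
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in hits:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("out.jpg", img)
```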
Blockchain ?/! The Start of a New Revolution
Today every big company you can think of is investing in Blockchain. From tech giants like Microsoft and IBM to financial giants like…
Oct 18, 2017