PinnedPublished inTDS ArchiveTuning-Free Longer Context Lengths For LLMs — A Review of Self-Extend (LLM Maybe LongLM)A simple strategy to enable LLMs to consume longer context length inputs during inference without the need for finetuning.Jan 4, 2024A response icon1Jan 4, 2024A response icon1
PinnedPublished inTDS ArchiveDemystifying GQA — Grouped Query Attention for Efficient LLM Pre-trainingThe variant of multi-head attention powering LLMs like LLaMA-2, Mistral7B, etc.Dec 27, 2023A response icon3Dec 27, 2023A response icon3
PinnedPublished inTDS ArchiveUnderstanding LoRA — Low Rank Adaptation For Finetuning Large ModelsFine-tuning large pre-trained models is computationally challenging, often involving adjustment of millions of parameters. This…Dec 22, 2023A response icon4Dec 22, 2023A response icon4
PinnedPublished inTDS ArchiveQuantum Computing ?/!Zeros and ones. This is how we imagined computing till now. This is what classical computing is. But a whole new concept is now changing…Jul 8, 2018A response icon1Jul 8, 2018A response icon1
ORPO — Preference Optimization without Reference Model · Combining Instruction Fine-tuning and Preference Alignment in a single stage. · Aug 29, 2024
Nothing has everything · A mathematical viewpoint. · Oct 30, 2019
Training HaarCascade Model on Microsoft Azure · In this hands-on tutorial, you will learn how to train your own Haar cascade model on Microsoft Azure. To understand Haarcascade I… · Dec 24, 2018
Blockchain ?/! The Start of a New Revolution · Today, every big company you can think of is investing in Blockchain, from tech giants like Microsoft and IBM to financial giants like… · Oct 18, 2017