19:02, 21 April 2024 Colin M (talk | contribs) created page MMLU (←Created page with ''''Measuring Massive Multitask Language Understanding''' ('''MMLU''') is a benchmark for evaluating the capabilities of language models. It consists of about 16,000 multiple-choice questions spanning 57 academic subjects including mathematics, philosophy, law and medicine. It is one of the most commonly used benchmarks for comparing the capabilities of large language models.<ref name=nyt/> The MMLU was released by a team of researc...')
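As the created article describes, MMLU items are multiple-choice questions and models are compared by the fraction they answer correctly. A minimal sketch of that scoring scheme is below; the two questions and the predicted letters are invented for illustration and are not taken from the actual benchmark.

```python
# Sketch of MMLU-style scoring: each item has four choices (A-D) and one
# gold letter; accuracy is the fraction of items where the model's chosen
# letter matches the key. Items and predictions here are toy examples.
from dataclasses import dataclass

@dataclass
class Item:
    question: str
    choices: list[str]  # exactly four options, indexed A-D
    answer: str         # gold letter, one of "A".."D"

items = [
    Item("What is 7 * 8?", ["54", "56", "58", "64"], "B"),
    Item("Which gas do plants absorb during photosynthesis?",
         ["Oxygen", "Nitrogen", "Carbon dioxide", "Helium"], "C"),
]

def accuracy(predictions: list[str], items: list[Item]) -> float:
    """Fraction of items whose predicted letter matches the gold answer."""
    correct = sum(p == it.answer for p, it in zip(predictions, items))
    return correct / len(items)

# A hypothetical model answers "B" then "A": one of two correct.
print(accuracy(["B", "A"], items))  # → 0.5
```

Reported MMLU scores are this same accuracy computed over the full ~16,000-question set, often broken down per subject.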
19:09, 22 March 2023 Colin M (talk | contribs) created page Talk:Floral design/GA1 (←Created page with '==GA Review== {{Good article tools}} {{subst:GAN/subst|{{subst:PAGENAME}}}} '''Reviewer:''' [[User:{{subst:REVISIONUSER}}|{{subst:REVISIONUSER}}]] ([[User talk:{{subst:REVISIONUSER}}|talk]] '''·''' [[Special:Contributions/{{subst:REVISIONUSER}}|contribs]]) ~~~~~ I can tell you've put a lot of work into expanding this article! It's thorough in its coverage of different aspects of the topic and well illustrated. Unfortunately, it's very far from meeti...')
16:38, 19 March 2023 Colin M (talk | contribs) created page Vocative expression (←Created page with 'In linguistics, a '''vocative''' or '''vocative expression''' is a phrase used to identify the addressee of an utterance. The underlined phrases in each of the following English sentences are examples of vocatives: <blockquote> {{under|Sir}}, your table is ready. I'm afraid, {{under|Mr. Renault}}, that your card has been declined. Quit playing around, {{under|bozo}}. </blockquote> Syntactically, vocatives are noun phrases which are isolated from th...')
20:12, 16 March 2023 Colin M (talk | contribs) created page LLaMA (←Created page with '{{otheruses|Llama (disambiguation)}} '''LLaMA''' ('''Large Language Model Meta AI''') is a large language model (LLM) released by Meta AI in February 2023. A variety of model sizes were trained, ranging from 7 billion to 65 billion parameters. LLaMA's developers reported that the 13 billion parameter model's performance on most NLP benchmarks exceeded that of the much larger GPT-3 (with 175 billion parameters) and...') Tag: Disambiguation links added
18:24, 14 March 2023 Colin M (talk | contribs) created page Talk:K-Meleon/GA2 (←Created page with '==GA Review== {{Good article tools}} {{subst:GAN/subst|{{subst:PAGENAME}}}} '''Reviewer:''' [[User:{{subst:REVISIONUSER}}|{{subst:REVISIONUSER}}]] ([[User talk:{{subst:REVISIONUSER}}|talk]] '''·''' [[Special:Contributions/{{subst:REVISIONUSER}}|contribs]]) ~~~~~ <!-- Please add all review comments below this comment, and do not alter what is above. So that the review can be kept within a single section, please do not use level 2 headers (==...==) be...')
16:00, 14 March 2023 Colin M (talk | contribs) deleted page Talk:One-shot learning (software) (G8: Deleted together with the associated page with reason: G8: Page dependent on a deleted or nonexistent page)
17:25, 10 March 2023 Colin M (talk | contribs) created page Talk:N-gram language model (←Created page with ' ==Todos== * Add a history section. Jurafsky and Martin has a short but useful section about this. * Add a section on smoothing * Add a section on applications ~~~~')
21:14, 9 March 2023 Colin M (talk | contribs) created page BookCorpus (←Created page with ''''BookCorpus''' is a dataset consisting of the text of around 11,000 unpublished books scraped from the Internet. It was the main corpus used to train the initial version of OpenAI's GPT,<ref name=gpt-1-paper/> and has been used as training data for other early large language models, including Google's BERT.<ref name=bert-paper/> The dataset consists of around 985 million words, and the books that comprise it span a r...')