SHA-RNN (Single Headed Attention RNN: Stop Thinking With Your Head) is a fresh paper that appeared on Arxiv just last week. In very concise language and metaphor, its introduction voices what many of us feel and lays out the problem the paper sets out to solve:

Language has been a thorn in humanity's side since we evolved a complex enough audio and graphics processing unit to grunt, let alone write cryptocurrency whitepapers or opinion columns. Language has been found at the core of every human conflict in history, from World Wars (culinary and otherwise) to the Great Border Skirmish (2008) between you and your loud neighbor. Many are of the opinion that language has redeeming features. They claim (with scant evidence) that language could contain useful knowledge far beneath the surface wasteland of memes and colourful insults we usually see, just as life might theoretically be found deep under the ice of Jupiter's moon Europa. Many fight against the homogenization of language by dividing and conquering as they did in the Tower of Babel era (see: Javascript frameworks). Regardless of how you feel about language, a gambler would expect language to exist for at least a few more years and is thus a worthy field of study.

Why is language, and thus language models, such a rich source of knowledge? Neural networks, plus or minus a few bajillion parameters, are theoretically capable of universal function approximation. When you're asking a sufficiently complex neural network to approximate language, you're asking it to approximate all the intricacies of the text, most of which you're likely not even consciously aware of. Language models, at least as they stand, are not intelligent - but they do echo intelligence back at us. The humans that created the vast datasets that define, describe, and expand our world are doing intellectual work.
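To make the "approximating language" idea concrete, here is a minimal sketch (my own illustration, not from the SHA-RNN paper): a language model is just a function approximator for P(next token | context). A tiny bigram count model stands in for the "sufficiently complex neural network" the quote describes; the function and variable names are hypothetical.

```python
# Sketch only: a bigram counting model as the simplest possible stand-in
# for a neural language model approximating P(next token | current token).
from collections import Counter, defaultdict

def fit_bigram_lm(text):
    """Count token-pair frequencies to approximate P(next | current)."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    # Normalize raw counts into conditional probability distributions.
    return {
        cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for cur, nxts in counts.items()
    }

lm = fit_bigram_lm("the cat sat on the mat because the cat was tired")
print(lm["the"])  # distribution over tokens that follow "the"
```

A real model like SHA-RNN replaces the count table with a recurrent network and attention, but the object being learned is the same conditional distribution.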