Interactive syllabus
👷 Readings in Digital Typography, Scientific Visualization, Information Retrieval and Machine Learning
[Petr Sojka et al.]: The Representations of Language which Allow Thinking, Fast and Slow 7. 1. 2021
![](/imgx/1/z/e/uwc7pyfnykicwkbx4ma_b/Almeida_Junior_-_Moca_com_Livro.jpg)
[By José Ferraz de Almeida Júnior]
Abstract
The brainstorming will be introduced by a short train of thought:
natural language is "embedded" into the human brain. Using the metaphor
of a graph of neurons (nodes) and synapses (connections, edges), we may
theorize how to implement a fast[Text], retrieval-capable System 1 and a slow
but inference-capable System 2 that represent key qualities of natural
language, in order to solve NLP tasks such as Information Retrieval, Question
Answering, or Text Summarization.
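The System 1 / System 2 division above can be sketched in code. The following toy is our own illustrative assumption, not anything from the talk: System 1 is a fast associative lookup over a precomputed embedding table (the fastText-like path) that answers only when confident, and System 2 is a slower, exhaustive fallback (standing in for the deliberate, BERT-like path). The three-dimensional "embeddings" and the confidence threshold are made up for the example.

```python
# Toy System 1 / System 2 sketch (illustrative assumption, not the talk's method).
import math

# Hypothetical 3-dimensional "embeddings" for a few words.
EMBEDDINGS = {
    "cat": (0.9, 0.1, 0.0),
    "dog": (0.8, 0.2, 0.0),
    "car": (0.0, 0.9, 0.1),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def system1(query_vec, threshold=0.95):
    """Fast associative retrieval: return the best match only if confident."""
    word, score = max(
        ((w, cosine(query_vec, v)) for w, v in EMBEDDINGS.items()),
        key=lambda pair: pair[1],
    )
    return word if score >= threshold else None

def system2(query_vec):
    """Slow deliberate path: an exhaustive fallback that always answers."""
    return max(EMBEDDINGS, key=lambda w: cosine(query_vec, EMBEDDINGS[w]))

def answer(query_vec):
    """Try the fast System 1 first; fall back to System 2 when S1 abstains."""
    return system1(query_vec) or system2(query_vec)
```

In this framing, "travelling from System 2 to System 1" is learning: answers that S2 keeps producing would eventually be cached as high-confidence S1 associations.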
Readings and literature
Prepare for the brainstorming by reading some of these, ordered by relevance:
- Daniel Kahneman: Thinking, Fast and Slow, at least chapter one about Systems 1 and 2.
- Lex Fridman with Daniel Kahneman: watch some YouTube interviews with Kahneman, e.g. the one with Lex Fridman, which touches on the neuronal representation of language in Systems 1 and 2 of the human brain.
- Terry Sejnowski: The unreasonable effectiveness of deep learning in artificial intelligence and Leslie S. Smith's response
- Julian Michael et al.: Asking without Telling
- Deep Embedded Non-Redundant Clustering, as a way to discretize in order to deduce (to travel from System 1 to System 2), as opposed to learning (to travel from System 2 to System 1).
An Outline
- Language as a carrier of thought (visual abstract :-), The Neural Theory of Language
- Brain-inspired AI is a hot topic today
- BL0, Systems 1 and 2 (fast and shallow fastText and slow and deep BERT metaphor)
- BL1, Lazy mind, the law of least effort (optimization and usage of S1)
- BL2, Autopilot: why not only conscious S2, the concept of priming (smooth learning from S2 to S1)
- BL3, Snap judgments: halo effect (emotions trigger S1 rather than rational S2) and confirmation bias (seeking confirmation of previously learned S1)
- BL4, Heuristics: simplification by S1 (substitution heuristic), and the availability heuristic (overestimating the probability of something we read more often or find easy to remember)
- BL5, Understanding statistics: the base-rate neglect mistake
- BL6, Past imperfect memories: experiencing self, remembering self
- BL7, Attention is all you need: cognitive ease and cognitive strain to balance loads between S1 and S2
- BL12, Summary: the S1 and S2 cooperation and the missing link
- Sejnowski: The unreasonable.. (Figs 1 and 5) and Leslie S. Smith's response
- Julian Michael et al: Asking without Telling, Figs 1 and 2
- Recent Manning papers on probing: A Structural Probe for Finding ..., and other recent papers on language representations
- Brainstorming Question: What does the S1+S2 metaphor bring to a) humankind's knowledge of language representation in general NLP tasks, and b) the research directions of our group (shallow and deep language representation models, fastText+attention → Big, Singing and Dancing BERT :-)
- Brainstorming discussion on a Miro board
- Summary, conscious evaluation, next steps.
https://is.muni.cz/el/fi/podzim2020/PV212/um/video/zoom_0.mp4