👷 Readings in Digital Typography, Scientific Visualization, Information Retrieval and Machine Learning

Petr Sojka et al.: The Representations of Language which Allow Thinking, Fast and Slow (January 7, 2021)

[Illustration: painting by José Ferraz de Almeida Júnior]


Abstract

The brainstorming will be introduced by a short train of thought: natural language is "embedded" in the human brain. Using the metaphor of a graph of neurons (nodes) and synapses (connections, i.e. edges), we may theorize how to implement a fast (fastText-like), retrieval-capable System 1 and a slow but inference-capable System 2 that together capture key qualities of natural language and solve NLP tasks such as Information Retrieval, Question Answering, or Text Summarization.
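
To make the metaphor concrete, here is a minimal, hypothetical sketch in plain numpy; none of the function names or the hashed-n-gram encoder come from the seminar materials. System 1 is played by a fast cosine-similarity lookup over precomputed fastText-style sentence embeddings, System 2 by a slower re-examination of the shortlisted candidates, a cheap stand-in for BERT-style deliberate inference:

```python
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    """fastText-flavoured encoder: average of hashed character n-grams."""
    vec = np.zeros(DIM)
    padded = f"<{text.lower()}>"
    for n in (3, 4):
        for i in range(len(padded) - n + 1):
            # seed a generator from the n-gram hash so each n-gram
            # always maps to the same pseudo-random vector (within a run)
            rng = np.random.default_rng(hash(padded[i:i + n]) % (2**32))
            vec += rng.standard_normal(DIM)
    return vec / (np.linalg.norm(vec) + 1e-9)

corpus = [
    "System 1 thinks fast and shallow",
    "System 2 thinks slow and deep",
    "fastText averages subword embeddings",
    "BERT runs many attention layers",
]
index = np.stack([embed(s) for s in corpus])  # built once, queried cheaply (S1)

def system1(query: str, k: int = 2) -> list:
    """Fast and associative: a single matrix-vector product over the index."""
    scores = index @ embed(query)
    return list(np.argsort(-scores)[:k])

def system2(query: str, candidates: list) -> int:
    """Slow and deliberate: re-examine each candidate token by token
    (a toy stand-in for BERT-style cross-attention re-ranking)."""
    q = set(query.lower().split())
    overlap = [len(q & set(corpus[c].lower().split())) for c in candidates]
    return candidates[int(np.argmax(overlap))]

query = "slow and deep system"
shortlist = system1(query)                 # S1: cheap recall of candidates
print(corpus[system2(query, shortlist)])   # S2: careful verification
```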

Readings and literature

Prepare for the brainstorming by reading some of these, ordered by relevance:

  1. Daniel Kahneman: Thinking, Fast and Slow, at least chapter one about Systems 1 and 2.
  2. Lex Fridman with Daniel Kahneman: watch some YouTube interviews with Kahneman, e.g. the Lex Fridman one, for thoughts about the neuronal representation of language in Systems 1 and 2 of the human brain.
  3. Terry Sejnowski: The unreasonable effectiveness of deep learning in artificial intelligence, and Leslie S. Smith's response to it.
  4. Julian Michael et al.: Asking without Telling.
  5. Deep Embedded Non-Redundant Clustering as a way to discretize in order to deduce (to travel from System 1 to System 2), as opposed to learning (to travel from System 2 to System 1); see the sketch just below this list.
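
Reading 5 frames clustering as the bridge from continuous System 1 representations to discrete symbols that System 2 can deduce over. As a toy stand-in for Deep Embedded Non-Redundant Clustering (the actual method learns the embedding and the clustering jointly; the synthetic data and plain k-means here are a simplification for illustration), the sketch below discretizes points in an embedding space to the id of their nearest centroid:

```python
import numpy as np

rng = np.random.default_rng(0)
# three synthetic "concept" blobs standing in for regions of embedding space
points = np.concatenate([
    rng.normal(loc=c, scale=0.3, size=(50, 2))
    for c in ([0, 0], [3, 0], [0, 3])
])

def kmeans(x: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Plain k-means; returns one discrete cluster id per embedding."""
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: each continuous vector -> nearest centroid's id
        dists = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # update step: move each centroid to the mean of its members
        centroids = np.stack([
            x[labels == j].mean(0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
    return labels

symbols = kmeans(points, k=3)
print(symbols[:10])  # discrete codes: raw vectors turned into symbols to deduce over
```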

An Outline

  1. Language as a carrier of thought (visual abstract :-) and The Neural Theory of Language
  2. Brain-inspired AI is a hot topic today
  3. BL0, Systems 1 and 2 (the fast-and-shallow fastText vs. slow-and-deep BERT metaphor)
  4. BL1, Lazy mind, the law of least effort (optimization and usage of S1)
  5. BL2, Autopilot: why we do not rely on conscious S2 alone, and the concept of priming (smooth learning from S2 to S1)
  6. BL3, Snap judgments: the halo effect (emotions trigger S1 rather than rational S2) and confirmation bias (seeking confirmation of what S1 has previously learned)
  7. BL4, Heuristics: simplification by S1 (the substitution heuristic) and the availability heuristic (overestimating the probability of things we read about more often or find easy to remember)
  8. BL5, Understand statistics: the base-rate neglect mistake
  9. BL6, Past imperfect memories: experiencing self, remembering self
  10. BL7, Attention is all you need: cognitive ease and cognitive strain to balance loads between S1 and S2 (see the attention sketch after this outline)
  11. BL12, Summary: the S1 and S2 cooperation and the missing link
  12. Sejnowski: The unreasonable effectiveness of deep learning in artificial intelligence (Figs. 1 and 5) and Leslie S. Smith's response
  13. Julian Michael et al.: Asking without Telling, Figs. 1 and 2
  14. Recent Manning papers on probing: A Structural Probe for Finding Syntax in Word Representations, and other recent papers on language representations
  15. Brainstorming Question: What does the S1+S2 metaphor bring to a) humankind's knowledge of language representation in general NLP tasks, and b) the research directions of our group (shallow and deep language representation models, fastText+attention -> Big, Singing and Dancing BERT :-)
  16. Brainstorming discussion with a Miro board
  17. Summary, conscious evaluation, next steps.
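
For outline item 10, which borrows its title from the Transformer paper, it may help to have the standard scaled dot-product attention on the table during the discussion. This is the textbook formula Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V in plain numpy; the toy Q/K/V values are made up for the demo:

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention over row-wise queries/keys/values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # pairwise query-key affinities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # row-wise softmax
    return weights @ V                         # weighted mix of the values

rng = np.random.default_rng(42)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one mixed value vector per query
```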
Video recording: https://is.muni.cz/el/fi/podzim2020/PV212/um/video/zoom_0.mp4