Posted by Nodus Labs | April 3, 2026
Landscape Reading Model — Text Network Representation

How do we perceive a text when we read it? The answer to this question can help us understand how attention functions and identify various strategies to enhance our writing. Additionally, it can inspire new methods of reading that increase engagement and accessibility. In this article, we’re going to demonstrate how the landscape reading model can be represented using text networks and how this representation helps generate new ideas and insights in InfraNodus.
Reading comprehension is closely related to working memory. We keep in mind the part of the text that we are reading at this very moment, but we are far less aware of what we read just half a minute ago. This limitation of our "working memory" can, however, be offset by representing the text as a network graph.
Nelson Cowan’s research [2] [3] revised the classical account of working memory capacity. Where Miller [17] proposed seven plus or minus two items, Cowan showed — by experimentally controlling for rehearsal and chunking — that the focus of attention holds approximately four independent chunks. This is the part of working memory where items are simultaneously available for association and comparison.
This constraint has direct implications for how we process text. It also provides a cognitive grounding for why representing language as a co-occurrence network — specifically with a 4-gram sliding window, as InfraNodus does [5] [6] — produces structurally meaningful results.
The Landscape Reading Model and Its Relation to Short-Term Memory
The Landscape Model emerged from a tension between two positions in reading research. The memory-based view held that comprehension is largely automatic — concepts activate passively through pre-existing associations, and the reader is carried forward by resonance with prior knowledge [4]. The constructionist view argued the opposite: readers strategically retrieve background knowledge to build coherence, and this active process is what makes understanding possible [1].
Each account was partially right and collapsed where the other was strong. Passive activation alone cannot explain how a reader recovers when they lose the thread. Active construction alone cannot explain how someone gets absorbed in a novel for three hours without deliberate effort.
Van den Broek and colleagues [8] proposed a structural integration rather than a compromise. Both processes operate, but on different triggers. Semantic neighbors of a concept activate automatically whenever it enters a processing cycle — cohort activation fires continuously. Strategic retrieval engages conditionally, only when the passive process fails to meet the reader’s coherence standards. The reader’s goals, domain expertise, and available cognitive resources modulate the threshold [7] [9].
The accumulated result across an entire text is what van den Broek called a “landscape” — an activation terrain where some concepts peak through repetition and reinforcement, while others fade into the background. This landscape, built cycle by cycle, becomes the reader’s memory of the text.
From Metaphor to Measurable Structure
The landscape metaphor is productive, but it remains theoretical. What if the landscape could be made visible and inspectable?
InfraNodus constructs an actual graph of the text [5] [6]. A sliding window of four words advances through the document one token at a time. At each position, every word within the window connects to every other. As the window progresses, connections accumulate — frequently co-occurring terms develop strong edges; isolated mentions remain peripheral.
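To make this mechanism concrete, here is a minimal sketch of such a sliding window in plain Python. It is a simplified stand-in, not the InfraNodus pipeline itself (which additionally normalizes words and filters stopwords); the tokenization and the example sentence are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(tokens, window=4):
    """Slide a `window`-token window through the text one position at a
    time; connect every pair of distinct words sharing a window.
    Pairs that recur across overlapping windows accumulate edge weight."""
    edges = Counter()
    for start in range(max(1, len(tokens) - window + 1)):
        for a, b in combinations(tokens[start:start + window], 2):
            if a != b:
                edges[tuple(sorted((a, b)))] += 1
    return edges

tokens = "the window moves through the text the way attention moves".split()
graph = cooccurrence_graph(tokens, window=4)
```

Frequently co-occurring terms end up with heavy edges, while a word mentioned once stays peripheral, exactly the accumulation described above.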

The choice of four is not arbitrary. Baddeley and Hitch’s multi-component model [15] describes the architecture of working memory — phonological loop, visuospatial sketchpad, central executive, episodic buffer [16] — but does not specify its capacity limit. Cowan [2] provided that specification: approximately four independent chunks held in concurrent awareness at any instant. Combined with a temporal decay horizon of roughly ten seconds for unattended sensory information [12], these two constraints define the cognitive channel through which comprehension must pass.
The 4-gram sliding window operationalizes this channel computationally. It moves through the text the way the focus of attention moves through reading — at each position, a small set of concepts share the same processing space, form associative links through co-presence, then give way to the next set. The accumulated graph, after the window has traversed the entire document, is the frozen trace of all local co-activation events.
Where van den Broek’s landscape model generates a theoretical activation matrix [10], the text graph generates an observable, inspectable structure.
What Clusters Reveal: The Cohort Made Visible
One of the Landscape Model’s key mechanisms is cohort activation: when a concept activates during reading, its semantic neighbors light up as well [9]. These neighborhoods form through both the reader’s prior knowledge and the co-occurrence patterns accumulated within the text itself.
In a text graph, community detection algorithms do something analogous: they partition the network into densely interconnected groups. Terms that frequently appear together within local windows get clustered. These clusters are the topological equivalent of cognitive cohorts — thematic neighborhoods that the text constructs through repeated co-activation [5] [6]. Visualized with color-coding and force-atlas spacing, they become immediately legible both to human readers and to LLMs.

The landscape model infers cohorts from behavioral data. The graph makes them explicit — colored, measurable, with clear boundaries.
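As a minimal illustration of how such a partition works (InfraNodus itself uses modularity-based community detection [5] [6]; the deterministic label-propagation routine and the toy weighted graph below are simplified stand-ins):

```python
def label_propagation(adj, rounds=10):
    """adj: node -> {neighbor: co-occurrence weight}. Each node adopts
    the label carrying the most neighbor weight (ties break
    lexicographically); densely interlinked groups converge to a
    shared label, i.e. a cluster."""
    labels = {n: n for n in adj}
    for _ in range(rounds):
        changed = False
        for n in sorted(adj):
            votes = {}
            for m, w in adj[n].items():
                votes[labels[m]] = votes.get(labels[m], 0) + w
            if votes:
                top = max(votes.values())
                best = min(l for l, c in votes.items() if c == top)
                if best != labels[n]:
                    labels[n] = best
                    changed = True
        if not changed:
            break
    return labels

# Two tight thematic triangles joined by one weak co-occurrence edge.
adj = {
    "a1": {"a2": 3, "a3": 3}, "a2": {"a1": 3, "a3": 3},
    "a3": {"a1": 3, "a2": 3, "b1": 1},
    "b1": {"b2": 3, "b3": 3, "a3": 1}, "b2": {"b1": 3, "b3": 3},
    "b3": {"b1": 3, "b2": 3},
}
communities = label_propagation(adj)
```

The two triangles converge to two distinct labels: the topological cohorts become explicit, countable objects rather than inferred activations.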
The Difference Between Peaks and Crossroads
This is where the network representation goes beyond what the Landscape Model describes.
In van den Broek’s framework, concepts that maintain high activation across many processing cycles become central to the resulting memory representation [7] [8]. They are the peaks of the landscape — essentially a frequency-based account of centrality.
Betweenness centrality captures something different [5] [6]. A node with high betweenness is not necessarily the most frequently activated — it is the one that sits on the shortest paths between different clusters. It is a bridge, not a peak. It connects semantic territories that would otherwise remain separate.

A concept can appear constantly but only within one cluster (high frequency, low betweenness). Another can appear less often but serve as the sole connection between two major thematic regions (lower frequency, high betweenness). The Landscape Model would predict the first concept dominates memory. The network analysis suggests the second is structurally more important — the crossroad where meaning from different parts of the text can flow and recombine.
Grasping the bridges reveals a text’s architecture. Remembering only the peaks reveals its surface.
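The peak/crossroad contrast can be checked on a toy graph: two tight clusters joined by a single rarely-mentioned term. The sketch below implements Brandes' algorithm for betweenness centrality on an unweighted graph; the graph itself is hypothetical.

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted, undirected graph."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS: shortest-path counts
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:
        bc[v] /= 2  # undirected: each pair was counted from both ends
    return bc

# Two triangles whose only connection is a low-degree "bridge" term.
adj = {
    "a1": ["a2", "a3"], "a2": ["a1", "a3"], "a3": ["a1", "a2", "bridge"],
    "bridge": ["a3", "b1"],
    "b1": ["b2", "b3", "bridge"], "b2": ["b1", "b3"], "b3": ["b1", "b2"],
}
bc = betweenness(adj)
```

Here "bridge" has the lowest degree in the graph yet the highest betweenness, because every path between the two clusters runs through it; the frequent within-cluster terms score zero.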
Structural Absence as a Cognitive Operator
Van den Broek’s model maps what activates and how intensely [7]. It has no mechanism for representing what’s missing.
In a text graph, the space between clusters is not nothing. It’s a structural feature [6] — a region where edges could plausibly exist but don’t. These voids signal unexplored relationships, implicit questions, potential connections that neither the author nor the reader has yet made explicit.
This is the most significant departure from the landscape framework. The landscape shows terrain as it exists. The graph shows terrain and the unmarked paths between its elevations — where new conceptual routes might be built. Detecting these gaps shifts reading from passive absorption to active generation.
Consider what perceiving a gap demands cognitively: holding two separate clusters and the absence between them simultaneously — at least three of the available four chunks consumed by something that doesn’t exist. This is why structural gaps remain cognitively invisible during normal reading. InfraNodus addresses this by converting the absence into a concrete bridging suggestion — a node or question that can be attended to directly, occupying one chunk instead of three.
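A minimal sketch of this kind of gap detection, assuming cluster labels have already been computed: cluster pairs with no connecting edge are the candidate gaps. The data is a toy example, not the InfraNodus implementation.

```python
from itertools import combinations

def structural_gaps(adj, labels):
    """Return pairs of clusters with no edge between them: the places
    where a bridging concept or question could be introduced."""
    linked = set()
    for n, nbrs in adj.items():
        for m in nbrs:
            if labels[n] != labels[m]:
                linked.add(frozenset((labels[n], labels[m])))
    clusters = sorted(set(labels.values()))
    return [p for p in combinations(clusters, 2)
            if frozenset(p) not in linked]

# Three thematic clusters: A touches B, B touches C, A and C never co-occur.
adj = {
    "a1": ["a2"], "a2": ["a1", "b1"],
    "b1": ["b2", "a2"], "b2": ["b1", "c1"],
    "c1": ["c2", "b2"], "c2": ["c1"],
}
labels = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "c1": "C", "c2": "C"}
gaps = structural_gaps(adj, labels)
```

The output names the absence explicitly (A and C are never connected), which is precisely the conversion from a three-chunk perception task into a single attendable object.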

There is a deeper implication. What if the real unit of comprehension is not the concept, nor the link, but the felt absence that reorganizes attention?
In that view, a gap is not a defect in the graph. It is a higher-order operator: passive activation propagates through existing structure; strategic retrieval begins when what is missing becomes cognitively tangible. Meaning does not emerge from connected concepts alone, but from the mind’s ability to stabilize an unconnected possibility long enough for a new bridge to form.
- clusters = what cognition can already hold
- betweenness = what cognition can traverse
- gaps = what cognition isn’t yet organized enough to grasp
Comprehension follows existing edges. Insight traces the contours of edges that haven’t formed.
When the Window Is Smaller but the Reach Is Wider: ADHD
This analysis has a clinical dimension that complicates the bottleneck story in a productive way. ADHD is associated with reduced working memory capacity — closer to two or three chunks rather than four [19] [20]. The straightforward prediction would be narrower bottleneck, greater gap-blindness, less cross-cluster awareness.
The research reveals something more nuanced. White and Shah [21] found that adults with ADHD show a broader scope of semantic activation — their word associations reach more semantically distant concepts than those of non-ADHD peers, and this wider activation scope mediates their higher scores on creative flexibility measures. The default mode network, normally suppressed during focused tasks, stays partially active in ADHD — allowing associative bleed between semantic territories that disciplined attention would fence off [22].
ADHD, then, does not simply narrow the sliding window. It changes the distribution of what the window captures. A neurotypical four-chunk focus tends to hold items from within the same local cluster — the cohort. An ADHD three-chunk focus holds fewer items, but those items are drawn from more distant regions of the semantic network. The window is smaller but its reach is wider. This is why ADHD is associated with enhanced divergent thinking — the ability to connect remote concepts — while simultaneously being associated with difficulty in sustained, coherent reading comprehension [23].
In graph terms, the ADHD reader gravitates naturally toward what betweenness centrality measures — cross-cluster bridging — while struggling with what cluster density represents — local thematic coherence.
A tool like InfraNodus serves ADHD readers not as a prosthetic for cross-cluster connection — they already do that spontaneously — but as a scaffold for local coherence. The clusters that the tool makes visible are exactly what the ADHD reader’s broader activation scope tends to skip over. The graph provides the structure that defocused attention dissolves. The gap detection feature, conversely, may be less necessary for ADHD readers who already perceive connections between distant semantic territories — but more useful as a way of validating and structuring the cross-cluster intuitions that their broader activation pattern produces.
Scaling the Bottleneck: From Reading to Institutional Strategy
The cognitive bottleneck does not disappear when individuals form organizations. It scales.
An executive can attend to the innovation pipeline, or regulatory risk, or inequality trends, or market stability — but perceiving connections between all four exceeds the same capacity constraint that limits a reader parsing a paragraph [2] [3]. A strategic landscape is no more transparent to a decision-maker than a textual landscape is to a reader.
Ray Dalio’s organizational principles [24] arrive at this conclusion from the opposite direction. Where cognitive science identifies the bottleneck, Dalio specifies the institutional response: build systems that detect connections individuals cannot perceive. The prescription is precise — design around bridges, not silos. Place decision rights and review processes around the concepts that connect otherwise separate domains, because these bridging concepts fall exactly into the gap between any single person’s attentional focus.
This is betweenness centrality as organizational design. The strategically crucial nodes — where technological change meets employment displacement, where capital flows intersect political legitimacy, where short-term returns collide with long-term resilience — connect clusters that no single focus of attention can span simultaneously. Organizations must externalize these connections: dashboards, cross-functional reviews, knowledge graphs that expose the full topology.
InfraNodus does for a text what Dalio argues institutions should do for strategic complexity: partition into perceivable clusters, highlight the bridges, expose the gaps. A reader using the tool on a difficult paper and an organization running cross-cluster review processes on a complex market execute the same operation — externalizing structure that exceeds the ~4-chunk bottleneck.
Dalio’s emphasis on tracking second-order effects — delayed consequences propagating across cluster boundaries (inequality degrading markets degrading confidence degrading capital allocation) — describes precisely the multi-hop paths that working memory can’t sustain. These are the longest shortest paths in the graph: most likely to be overlooked, most consequential to detect, most urgently needing externalization.
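These long chains can be located mechanically: breadth-first search from every node finds the most distant pair and a diameter path between them. The sketch below uses a toy chain modeled on the inequality example; the node names and edges are illustrative assumptions.

```python
from collections import deque

def longest_shortest_path(adj):
    """BFS from every node; return one shortest path between the most
    distant pair of nodes (a diameter path of the graph)."""
    best, parents = (0, None, None), None
    for s in adj:
        dist, par = {s: 0}, {s: None}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    par[w] = v
                    queue.append(w)
        t = max(dist, key=dist.get)
        if dist[t] > best[0]:
            best, parents = (dist[t], s, t), par
    _, _, t = best
    path, v = [], t
    while v is not None:          # walk parent pointers back to the source
        path.append(v)
        v = parents[v]
    return path[::-1]

# Toy second-order chain: each edge is a direct, perceivable consequence.
adj = {
    "inequality": ["markets"], "markets": ["inequality", "confidence"],
    "confidence": ["markets", "capital"], "capital": ["confidence"],
}
path = longest_shortest_path(adj)
```

The returned path spans every intermediate hop: each individual edge fits in working memory, but the full chain is exactly what needs to be externalized.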
The bottleneck constrains not just reading but thinking, strategy, collective intelligence. The same structural tools address it at every scale: render the network visible, trace the bridges, surface the gaps, build systems that hold the full landscape in view.
Reader-Dependent Landscapes vs. Text-Intrinsic Graphs
A tension worth noting. The landscape model insists that the same text produces different activation patterns in different readers — a scientist and a casual reader generate distinct landscapes from the same paragraph, shaped by their coherence standards, goals, and expertise [8] [9].
A text graph yields one network per document [5]. It represents the co-occurrence structure intrinsic to the text — the landscape any reader could traverse, not the landscape any particular reader does traverse.
This could be seen as a limitation or as a feature. The single graph shows the text’s structural affordances — all available connections and gaps. Shifting the analytical lens (entering through clusters, through gaps, through latent nodes) simulates the effect of different reading stances.
Collapsing Time into Space
The Landscape Model is fundamentally temporal — it tracks activation fluctuations cycle by cycle, and the memory representation emerges as a cumulative trace of these dynamics [7] [10]. The text graph collapses this temporal dimension into a spatial topology that can be inspected, measured, and navigated all at once.
The sequence of activation may be lost in this transformation — the dynamic unfolding that makes reading a process rather than a product. InfraNodus partially recovers this through its dynamic trend activation feature, which shows how the graph evolves over the text’s progression. But the graph offers what the temporal model fundamentally cannot: a structural overview that reveals the architecture of meaning in its totality [5] [6]. The reader cannot easily see the landscape while traversing it. The graph offers the view from above — including parts not yet walked through.
The graph works like a map. The interface features and analytical hints work like GPS — directing attention through the landscape.
Not a replacement for the landscape model. A complementary projection. One captures how reading unfolds in time. The other captures the structure that reading, over time, constructs.
The Neural Architecture In Between
There is a neural architecture that occupies precisely the intermediate position between temporal process and spatial structure.
Long Short-Term Memory networks (Hochreiter & Schmidhuber, 1997 [18]) process sequences one token at a time, carrying a hidden state forward. At each step: a forget gate discards elements from prior state (analogous to activation decay between processing cycles), an input gate incorporates new information (the current cycle’s textual input), an output gate selects what becomes available downstream (carryover to the next cycle). The LSTM traverses. Its cumulative representation, shaped by every prior gating decision, directly parallels the landscape model’s claim that memory emerges from the dynamics of activation and deactivation [7] [8].
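The gating sequence can be written out as a toy, single-unit cell. The scalar weights are chosen arbitrarily for illustration; real LSTMs use learned weight matrices over vectors.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One processing cycle of a toy single-unit LSTM cell.
    w maps each gate to (input weight, recurrent weight, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])   # forget gate: decay of prior state
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])   # input gate: the current cycle's material
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2]) # candidate content
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])   # output gate: carryover downstream
    c = f * c + i * g         # cell state: cumulative activation trace
    h = o * math.tanh(c)      # hidden state: what the next cycle sees
    return h, c

w = {gate: (0.5, 0.5, 0.0) for gate in "figo"}  # arbitrary toy weights
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.8]:  # a toy "token" sequence
    h, c = lstm_step(x, h, c, w)
```

Every later state is shaped by every earlier gating decision, which is the computational analogue of the landscape's cycle-by-cycle accumulation.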
Transformer architectures abandoned sequential processing entirely. Computing attention across all tokens in parallel, they gained simultaneous visibility over the entire sequence — at the cost of the temporal dynamics that made LSTMs a plausible model of how humans actually read. The architectural shift from LSTM to transformer recapitulates, in compressed form, the conceptual shift from landscape model to text graph: traversal becomes topology, process becomes structure.
The LSTM reads. The transformer sees. The text graph maps.
Beyond the Bottleneck — Variability as Navigation Strategy
The solutions proposed so far — text network visualization, organizational bridge-detection, cross-functional teams — share a common logic: externalize the landscape so it can be perceived despite the bottleneck. They are prosthetics. They work. But they leave the bottleneck itself untouched.
There is another approach. Not expanding the window, but changing how the window moves.
The EightOS framework [25], developed as an adaptive movement practice, proposes cognitive variability as the key navigation principle. It identifies a progression:
- uniform variability — establishing common ground, shared rhythm
- regular variability — introducing patterned differences that generate controlled tension
- fractal variability — distributing variation across scales so local structure echoes global form
- complex variability — allowing novel combinations that no single agent predicts
The system cycles through these states, returning to uniform variability for regeneration before beginning again.
This is a navigation protocol for the ~4-chunk bottleneck.
A reader — or a thinker, or a strategist — who stays in one cluster sees no gaps.
One who jumps chaotically between clusters sees no coherence.
But one who oscillates through the variability sequence can progressively build a representation of the full landscape without ever needing to hold it simultaneously.
Settling into a cluster builds local coherence (uniform). A bridging concept pulls attention toward a neighbor (regular). Recognizing the same structural pattern at a different scale or in a different cluster — that recognition itself functions as the bridge (fractal). Novel cross-domain combinations emerge (complex). Then consolidation and restart.
The bottleneck remains. Four chunks is still four chunks. But the trajectory through the landscape compensates for the constraint.
This connects to the A-R-D sequence: assimilation, redirection, dissipation. When the reader — or the organization, or the body in movement — encounters a structural gap, the gap manifests first as tension: something does not fit, a transition fails, coherence breaks. The standard response is to force coherence (assimilate harder) or abandon the thread (dissipate prematurely). The A-R-D sequence proposes a third path: absorb what fits the current frame, redirect toward what does not integrate directly (approach the gap obliquely rather than head-on), and release the rigidity that prevents the network from reorganizing.
In reading terms: when a text resists comprehension, the reader who forces local coherence misses the gap entirely. The reader who abandons the thread loses the local structure. The reader who redirects — pauses, shifts to a different entry point, approaches the difficult passage from the perspective of a different cluster — allows the gap to become perceptible without exceeding the bottleneck’s capacity. The gap does not need to be held in working memory. It needs to be circled, approached from multiple directions, until its shape becomes inferable from the surrounding terrain.
This is what fractal variability offers that linear reading cannot: the recognition of structural isomorphisms across scales. When the same dynamic appears in the Landscape Model’s processing cycles, in the text network’s co-occurrence topology, in the transformer’s attention patterns, and in organizational decision-making, the reader does not need to hold all four domains in the focus of attention simultaneously. They need only recognize the recurring pattern — co-activation within a limited window producing cumulative structure — and let the pattern itself serve as the bridge between domains. The cross-domain echo is the bridge. The isomorphism replaces the need for simultaneous representation.
This article has been performing exactly this operation. The 4-chunk bottleneck [2] [3] was first recognized in the Landscape Model’s processing cycle [8]. Then the same structure was identified in the text network’s sliding window [5] [6]. Then in the transformer’s attention architecture [18]. Then in organizational strategy [24]. At no point did the argument require holding all four domains simultaneously. It required recognizing the same pattern moving through different substrates — and letting each recognition deepen the understanding of the ones that came before.
The text network makes the landscape visible. The organizational protocol makes the gaps actionable. But the practice of cycling through variability — moving between scales, between clusters, between states of tension and release — makes the mind itself a more capable navigator of the landscapes it cannot fully see.
The bottleneck does not go away. But the landscape, approached fractally, reveals more of itself than any single view could contain.
References
[1] Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101(3), 371–395.
[2] Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–185.
[3] Cowan, N. (2010). The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science, 19(1), 51–57.
[4] Myers, J. L., & O’Brien, E. J. (1998). Accessing the discourse representation during reading. Discourse Processes, 26(2–3), 131–157.
[5] Paranyushkin, D. (2011). Identifying the pathways for meaning circulation using text network analysis. Nodus Labs. Published online at noduslabs.com.
[6] Paranyushkin, D. (2019). InfraNodus: Generating insight using text network analysis. Proceedings of the World Wide Web Conference (WWW ’19), 3584–3589. ACM.
[7] van den Broek, P. (1995). A ‘landscape’ model of reading comprehension: Inferential processes and the construction of a stable memory representation. Canadian Psychology, 36(2a), 53.
[8] van den Broek, P., Young, M., Tzeng, Y., & Linderholm, T. (1999). The landscape model of reading. In H. van Oostendorp & S. R. Goldman (Eds.), The construction of mental representations during reading (pp. 71–98). Erlbaum.
[9] Yeari, M., & van den Broek, P. (2011). A cognitive account of discourse understanding and discourse interpretation: The Landscape Model of reading. Discourse Studies, 13(5), 635–643.
[10] Yeari, M., & van den Broek, P. (2016). A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis. Behavior Research Methods, 48(3), 880–896.
[11] Cowan, N. (2005). Working-memory capacity limits in a theoretical context. In C. Izawa & N. Ohta (Eds.), Human learning and memory: Advances in theory and applications. The 4th Tsukuba International Conference on Memory. Erlbaum.
[12] Cowan, N., Lichty, W., & Grove, T. R. (1990). Properties of memory for unattended spoken syllables. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(2), 258–269.
[13] Linderholm, T., Virtue, S., Tzeng, Y., & van den Broek, P. W. (2004). Fluctuations in the availability of information during reading: Capturing cognitive processes using the landscape model. Discourse Processes, 37(2), 165–186.
[14] Singer, M., Graesser, A. C., & Trabasso, T. (1994). Minimal or global inference during reading. Journal of Memory and Language, 33(4), 421–441.
[15] Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), The Psychology of Learning and Motivation (Vol. 8, pp. 47–89). Academic Press.
[16] Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417–423.
[17] Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
[18] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.
[19] Martinussen, R., Hayden, J., Hogg-Johnson, S., & Tannock, R. (2005). A meta-analysis of working memory impairments in children with attention-deficit/hyperactivity disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 44(4), 377–384.
[20] Kasper, L. J., Alderson, R. M., & Hudec, K. L. (2012). Moderators of working memory deficits in children with attention-deficit/hyperactivity disorder (ADHD): A meta-analytic review. Clinical Psychology Review, 32(7), 605–617.
[21] White, H. A., & Shah, P. (2016). Scope of semantic activation and innovative thinking in college students with ADHD. Creativity Research Journal, 28(3), 275–282.
[22] Hoogman, M., Stolte, M., Baas, M., & Kroesbergen, E. (2020). Creativity and ADHD: A review of behavioral studies, the effect of psychostimulants and neural underpinnings. Neuroscience & Biobehavioral Reviews, 119, 66–85.
[23] Martinussen, R., Hayden, J., Hogg-Johnson, S., & Tannock, R. (2005). A meta-analysis of working memory impairments in children with attention-deficit/hyperactivity disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 44(4), 377–384.
[24] Dalio, R. (2017). Principles: Life and Work. Simon & Schuster.
[25] Paranyushkin, D. (2016). EightOS: Bodily Adaptive Intelligence. Nodus Labs. Published online at 8os.io.
