Learning as adding new functions to a concept network
In concept-network mediums such as the human brain, a zettelkasten, or an artificial neural network (ANN), learning entails integrating new ideas into an existing network.
A key aspect of this learning mechanism is treating ideas as functions with dependencies. These dependencies are existing concepts that a new idea relies upon for its comprehension. For instance, to grasp the concept of inertia, it must be connected to the pre-existing idea of mass. Identifying these dependencies is crucial to the learning and contextualisation of new ideas.
Conversely, the new idea must also serve as a dependency for other concepts in the network, allowing for its utilisation. Learning an idea is therefore not a simple transition between the two states “not learned” and “learned”, but an integration process in which ideas are connected through dependencies. Without this process, an idea may exist within the network, but its meaning, implications and function remain unrealised and inaccessible to the rest of the concept network.
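To make this integration process concrete, here is a minimal sketch in Python; the `Concept` and `ConceptNetwork` names are illustrative rather than drawn from any particular library. “Learning” a concept means adding its node and wiring its dependency edges in both directions, so that inertia, for instance, cannot be integrated without a link to mass.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    body: str                                            # the idea itself, e.g. a note's text
    dependencies: set[str] = field(default_factory=set)  # concepts this idea relies on
    dependents: set[str] = field(default_factory=set)    # concepts that rely on this idea

class ConceptNetwork:
    def __init__(self):
        self.concepts: dict[str, Concept] = {}

    def learn(self, concept: Concept) -> None:
        """'Learning' = adding the node AND wiring its dependency edges."""
        self.concepts[concept.name] = concept
        for dep in concept.dependencies:
            if dep in self.concepts:
                self.concepts[dep].dependents.add(concept.name)

# Inertia only becomes meaningful once it is linked to mass.
network = ConceptNetwork()
network.learn(Concept("mass", "a measure of an object's resistance to acceleration"))
network.learn(Concept("inertia", "the tendency of a body to resist changes in motion",
                      dependencies={"mass"}))
```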
Apart from dependencies, ideas also have inputs and outputs, similar to functions in programming. An idea processes an input and generates an output. For instance, the idea that living with serial killers is risky may process “Bob is living with a serial killer” as an input and yield “Bob’s living situation is risky” as an output.
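Phrased as code, such an idea is literally a function from input to output. The following toy sketch uses the example above; the string matching is purely illustrative, not a claim about how the inference would actually be implemented:

```python
from typing import Optional

def living_with_serial_killers_is_risky(situation: str) -> Optional[str]:
    """Idea-as-function: maps a statement about a living situation to a risk assessment."""
    if "living with a serial killer" in situation:
        subject = situation.split(" is", 1)[0]   # "Bob is living with..." -> "Bob"
        return f"{subject}'s living situation is risky"
    return None

print(living_with_serial_killers_is_risky("Bob is living with a serial killer"))
# -> Bob's living situation is risky
```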
The zettelkasten reifies this concept of ideas having dependencies. Every note, or ‘zettel’, acts as an idea. These zettels are then linked to other notes, establishing a network of dependencies. In this system, ideas gain context and purpose, their value derived from the interdependencies they share with others.
With respect to inputs and outputs, Large Language Models (LLMs) operationalise the ideas encapsulated in a zettel. Integrating a zettel with an LLM makes it possible to execute the idea it contains on given inputs, generating corresponding outputs. This functionality elevates the role of each zettel in a zettelkasten, rendering it not just a node in a network of dependencies, but also a processing unit.
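A rough sketch of what this operationalisation could look like; `call_llm` is a hypothetical stand-in for whichever chat or completion API is actually used, and the prompt format is an assumption:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for any chat/completion API; the name and signature are assumptions."""
    raise NotImplementedError("wire this to an LLM provider of choice")

def apply_zettel(zettel_text: str, input_statement: str) -> str:
    """Treat a zettel as a processing unit: the LLM applies the idea to the input."""
    prompt = (
        f"Idea:\n{zettel_text}\n\n"
        f"Input:\n{input_statement}\n\n"
        "Apply the idea to the input and state the conclusion."
    )
    return call_llm(prompt)

# apply_zettel("Living with serial killers is risky.",
#              "Bob is living with a serial killer")
# -> "Bob's living situation is risky."
```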
Some knowledge is best represented in mediums other than text, particularly “procedural memory”, which, in the context of the human brain, refers to knowledge of how to carry out certain actions and tasks. Such knowledge is often best represented as executable code, owing to the reliability and speed advantages of program code relative to LLM inference. Representing this knowledge as code in no way precludes it from being employed by LLMs or composed with textual knowledge, as LLMs are capable of interfacing with external tools. More conceptual procedural knowledge, such as research skills, will likely be best represented partly as code and partly as text.
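As one possible illustration, a piece of procedural knowledge can live as an ordinary function, with a tool description exposing it to an LLM; the exact tool schema varies by provider, so the one below is only indicative:

```python
import statistics

def summarise_sample(values: list[float]) -> dict:
    """Procedural knowledge as code: a fast, reliable routine the LLM need not reason through."""
    return {"mean": statistics.mean(values), "stdev": statistics.pstdev(values)}

# A tool description the LLM can see; the exact schema depends on the provider.
summarise_sample_tool = {
    "name": "summarise_sample",
    "description": "Compute the mean and standard deviation of a numeric sample.",
    "parameters": {
        "type": "object",
        "properties": {"values": {"type": "array", "items": {"type": "number"}}},
        "required": ["values"],
    },
}
```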
Beyond the obvious ability of “programmatic ideas” to refer to other programmatic ideas as dependencies, as is common in code, they can also serve as dependencies for textual ideas and employ textual ideas as dependencies of their own. For example, a textual idea might refer to a programmatic idea as relevant to solving a certain problem, or, in the opposite direction, a programmatic idea might refer to a textual idea as context for understanding its purpose or output.
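A hypothetical sketch of such cross-references, with made-up zettel ids: the textual note cites the programmatic one as the relevant routine, and the programmatic note cites the textual one as its rationale.

```python
# Hypothetical zettel ids; references run in both directions.
textual_zettel = {
    "id": "202401120931",
    "body": ("Survey responses should be sanity-checked before analysis; "
             "the routine in [[202401120945]] implements the check."),
    "links": ["202401120945"],          # textual idea -> programmatic idea
}

programmatic_zettel = {
    "id": "202401120945",
    "code": "def sanity_check(rows): return [r for r in rows if r is not None]",
    "links": ["202401120931"],          # programmatic idea -> textual idea (its rationale)
}
```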
If you found this interesting, have feedback or are working on something related, let’s chat: twitter (@0xdist) or schedule a 20 min call