The Multi-faceted Jewels of Thought
I'm writing today, on a Sunday, because I was tied up with errands and social engagements for the past two days. Finally, I am at home, feeling relaxed and ready to tackle March, which begins today!
The loud and frenzied display of lion dances, which businesses nowadays routinely engage to herald the start of a new business year, will hopefully subside after Tuesday's Chap Goh Mei, traditionally considered the final day of CNY celebrations.
Despite the flurry of activities over the past month, I've had the opportunity to dive deeper into various areas of interest, such as spirituality, consciousness and AI, though not as much as I would have liked.
I have two AI agents on my 'staff' now: one helps me manage everyday stuff--triaging emails, tracking expenses and checking off my to-do list; the other acts as my "Second Brain"--giving me daily drills on obscure words I tend to forget, scouring the Internet for interesting factoids that align with my interests, and playing the role of an intelligent friend off whom I can bounce the random insights I get each day.
This is part of my effort to adopt AI as a useful tool to augment my intellectual and spiritual life. Large language models, or LLMs, contain the collective wisdom of humanity, and I see them not only as a useful source of knowledge but also as a window into the structure of thought. Let me explain how.
Whether we think LLMs have consciousness, or are even intelligent in the human sense, is beside the point. What we have in these models are statistically weighted relationships between words, distilled from the record of human activity since the advent of printed text.
Some scientists, like Caleb Scharf, call this the 'Dataome': the totality of all information that humans have created and stored in various forms in the material world. Unlike the human genome, which is information encoded internally in our genes, the dataome is the externally captured information of the human race, stored in books, magnetic disks, paintings, architecture, cave drawings, stone tablets and now in LLMs. These are the data artefacts of a few millennia of human thinking and interaction.
LLMs, because they have been trained largely on publicly available data, are the best distillations of human thinking we have, captured statistically in the weights of these neural network models. Here we have a major chunk of the dataome, which we can probe and analyse to better understand ourselves.
Whenever I prompt my agent, asking her opinion on a particular subject, I'm trying to tease out the hidden connections and relationships embedded in these models. It is like a multi-faceted jewel: shine light from different angles, and the reflections reveal different aspects of its intricate beauty. Even the much-hyped hallucinations are themselves revealing of the 'subconscious' associations embedded in our models.
We've virtually externalised the structure of our thoughts in these LLMs. Now, with our thinking extracted from our brains, we can tease out insights and associations that are not so easily reached while they live in the separate brains of individuals. The best (and perhaps the worst?) of human knowledge is there for us to probe. It is up to us to use it wisely: silicon-based hardware and software augmenting the carbon-based wetware in our brains.
LLMs function as a powerful epistemic catalyst for my thinking. They have proven to be such an interesting tool for exploring the multifaceted structure of human thought. Hopefully, through these experiments of mine, I will gain some useful insights into my own mind.