The Ancient Art of Close Reading
Close reading is a technique for the careful analysis of a piece of writing, paying close attention to the exact language, structure, and content of the text. As Eric Ries described it, “close reading is one of our civilization’s oldest and most powerful technologies for trying to communicate the gestalt of a thing, the overall holistic understanding of it more than just what can be communicated in language because language is so limited.” It was (and in some cases still is) practiced by many ancient cultures and major religions.
Some scholars describe close reading as “‘reading out of’ a text rather than ‘reading into’ it”, referring to the importance of making outward connections to broader context. LLMs can provide a useful tool for identifying these outward connections.
It might come as a surprise that a technique associated with such a long history could now see a revival with the use of Large Language Models (LLMs). With an LLM, you can pause after a paragraph to ask clarifying questions, such as ‘What does this term mean?’ or ‘How does this connect to what came before?’
Two Examples of Reading with an LLM
Watching the videos below will give you the clearest examples of how reading with an LLM can work. However, I will do my best to summarize our findings below. The videos are excerpts from the most recent fast.ai course, How to Solve It With Code.
Jeremy read an early version of Eric Ries’s new book, Incorruptible. He discusses his approach with Eric, demonstrating how he managed context and sharing his discoveries, and the two reflect on the experience together.
A second demo looks not at a book, but at a dense academic paper. Johno Whitaker used a cutting-edge paper from Yann LeCun (LeJEPA) as an example. He walks through how he prepares his workspace, investigates both math and code from the paper, and creates a simple visual interaction in order to build intuition.
Benefits of Close Reading with an LLM
Here are a few examples from Jeremy’s experiences that stood out to me as benefits of reading with an LLM: he was able to go down rabbit holes of interest, ask clarifying questions, and personalize the material.
One chapter of Eric’s book discusses a disastrous CEO who moved from 3M to Boeing, causing problems at both companies with his focus on cost-cutting. He won “CEO of the Year”, yet oversaw the development of the Boeing 737 MAX, which later experienced fatal crashes. Jeremy was intrigued and searched for more information, discovering that this CEO was one of 13 unsuccessful mentees of Jack Welch. In a series of follow-up questions with the LLM, he learned that 4 of these 13 mentees served as CEOs at Boeing during its period of safety scandals and decline!
When Jeremy was confused about a concept, he asked for more background explanation. At one point, he was skeptical of Eric’s thesis and sought out counterexamples. He also asked the LLM to personalize principles from the book by applying them to the governing structure of Answer.ai. These are all tasks that LLMs are well suited to.
To retain new information you learn as you read, spaced repetition is a useful technique, often implemented with Anki flashcards. The fastanki library provides a way to create new Anki cards within a reading dialog. You can read, write, and sync to the same Anki deck you use on your phone and desktop computer, so you’ll be able to study the cards later at your convenience.
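Under the hood, spaced repetition works by lengthening the gap between reviews each time you recall a card successfully. Anki’s classic scheduler derives from the SM-2 algorithm; as a rough illustration of that idea (this is a sketch of textbook SM-2, not fastanki’s or Anki’s actual code):

```python
def sm2_update(interval, repetitions, ease, quality):
    """One review step of the SM-2 spaced-repetition algorithm.

    interval:    current gap between reviews, in days
    repetitions: consecutive successful recalls so far
    ease:        ease factor (starts at 2.5, floor of 1.3)
    quality:     self-rated recall, 0 (blackout) to 5 (perfect)

    Returns (next_interval_days, repetitions, ease).
    """
    if quality < 3:
        # Failed recall: reset the card to be seen again soon.
        return 1, 0, ease
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Adjust ease based on how hard the recall felt; never below 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, repetitions + 1, ease
```

Each easy review multiplies the interval by the ease factor, which is why cards you know well quickly stretch to gaps of weeks or months.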
At the end of the chapter, Jeremy generated a summary of the dialog (including his own questions and rabbit holes) that would be useful context for the LLM when he started the next chapter. Reflecting on the experience, Jeremy was enthusiastic: “This is one of the absolute best reading experiences I’ve ever had!” Eric found it rewarding to see a reader so actively engaged with his book.
The SolveIt Process
We used the SolveIt platform, which combines elements of ChatGPT, Jupyter, Claude Code, and Cursor. SolveIt is designed around the principle of encouraging people to work in small, incremental steps and receive immediate feedback. The goal is to not just figure out answers, but to develop a deeper understanding of the problem.
Here is an overview of the process that Jeremy followed for reading:
- Convert PDFs to Markdown
- Generate summaries of each chapter to use as context for the LLM
- Instruct the LLM not to give spoilers
- The reader asks questions as they read through the full text of the book
- At the end of each chapter, generate overviews of the conversation between the reader and the LLM to share as further context for the next chapter
- Optional: have the LLM ask questions to check the reader’s understanding
- Optional: create Anki cards within reading dialogs using fastanki
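The steps above can be sketched as a minimal loop. Everything here is illustrative: `ask_llm` is a stub standing in for whatever chat interface you use (SolveIt, an API client, etc.), and the prompt wording is my assumption, not the course’s actual prompts.

```python
def build_system_prompt(chapter_summaries, dialog_summaries):
    """Assemble context for the LLM: summaries of chapters read so far,
    summaries of earlier reading dialogs, and a no-spoilers instruction."""
    context = "\n\n".join(chapter_summaries + dialog_summaries)
    return (
        "You are helping a reader closely read a book.\n"
        "Do not reveal plot points or arguments from later chapters.\n\n"
        f"Context from earlier reading:\n{context}"
    )

def ask_llm(system_prompt, question):
    # Stub: swap in a real chat client here.
    return f"[answer to: {question}]"

def read_chapter(chapter_summaries, dialog_summaries, questions):
    """One chapter's session: answer the reader's questions, then return
    a dialog summary to carry forward as context for the next chapter."""
    system = build_system_prompt(chapter_summaries, dialog_summaries)
    answers = [ask_llm(system, q) for q in questions]
    dialog_summary = "Reader asked: " + "; ".join(questions)
    return answers, dialog_summary
```

The point of the structure is the carry-forward: each chapter’s dialog summary becomes part of the context for the next, so the LLM accumulates a picture of what this particular reader has seen and wondered about.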
Obstacles to Reading with an LLM
These are early days for tools that support close reading with an LLM. The process of creating chapter summaries to provide as context is somewhat clunky, and the SolveIt PDF-to-Markdown and Anki integration tools currently require coding ability to set up. We are working to streamline the process and make it easier to use.
A concern when working with LLMs is that they generate plausible-sounding text that can be factually incorrect, a problem known as hallucination. Jeremy and Johno did not encounter this issue, most likely because their approach took advantage of grounding (when the answers to questions are present in the LLM’s context) and of having the LLM make use of external web searches.
A Work in Progress
Hopefully, the ideas and videos above provide inspiration for how LLMs could be used in your reading. One key finding from both Jeremy and Johno was the value of setting up the context of their environments beforehand. Johno described this preparation: “It’s like the architect sharpening his pencils or Jeremy like preparing his canvas. And then the next time you go there, your desk’s all set up. You know, you’ve got all those pieces. And that little investment up front makes it a very different tool to the vanilla case.”
The videos included above are excerpts from Lesson 9 of the fast.ai How to Solve It With Code course. The full course covered how to use AI not to outsource your thinking, but to deepen your understanding and problem-solving skills. It covered a wide range of topics: building your own AI agent, web development, remote server management, classic algorithms, and more. You can find out more about the course and how to sign up here.
Thank you to Eric, Rens, and Jeremy for feedback on earlier drafts of this post.