
AI Should Make You Smarter, Not Lazier

I've been thinking a lot about how we use AI. And honestly, I'm worried we're doing it wrong.

Not "wrong" in a moral panic, robots-are-coming-for-us way. Wrong in a much more boring way: we're using tools that could make us smarter to instead make us lazier. And in the process, we're handing over our data, our thoughts, and our learning to companies that don't have our best interests at heart.

This post is about why I built ReadingBuddy, a Chrome extension for AI-powered reading assistance. But more than that, it's about the philosophy behind it.


The Privacy Problem Nobody Talks About

Here's something that bothers me: every time you paste something into ChatGPT, you're sending your thoughts to OpenAI's servers. Every Google search, every Claude conversation, every Copilot suggestion. It all gets logged, processed, and stored somewhere you don't control.

Most people shrug at this. "I have nothing to hide." But that misses the point.

The issue isn't that you're hiding something. The issue is that your learning process is deeply personal. The questions you ask, the concepts you struggle with, the things you're curious about. It's a map of your mind. And you're handing that map to corporations whose business model is monetizing your data.

When you ask an AI to explain quantum entanglement, you're revealing that you don't understand quantum entanglement. When you ask it to help debug your code, you're revealing what you're building and where you're stuck. When you ask it to summarize an article about a medical condition, you're revealing your health concerns.

None of this is "bad" in isolation. But aggregated over time, it's an incredibly detailed profile of who you are, what you know, what you don't know, and what you care about.

The alternative exists. You can run AI models locally. Ollama makes it dead simple to run Llama, Mistral, Qwen, and dozens of other models on your own machine. Your questions never leave your computer. No logs. No tracking. No "we may use your conversations to improve our models."
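
To make that concrete: Ollama serves a plain HTTP API on localhost, so asking a local model something is a single request that never touches the network. Here's a minimal TypeScript sketch; the model name llama3 is just an example of something you might have pulled.

    // Minimal sketch: ask a locally running Ollama model a question.
    // Assumes Ollama is running and a model has been pulled, e.g. `ollama pull llama3`.
    async function askLocalModel(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "llama3", prompt, stream: false }),
      });
      const data = await res.json();
      return data.response; // the full answer, generated entirely on your machine
    }

That's the whole privacy story: the request goes to localhost and stops there.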

This was the first principle behind ReadingBuddy: privacy by default. Local models first. Cloud APIs as an option for people who want them, not as the only choice.


AI as a Crutch vs. AI as a Partner

Here's where it gets more nuanced.

I've watched people use AI in two very different ways. The first way: "Write this email for me." "Summarize this article so I don't have to read it." "Generate a blog post about X." The AI does the thinking. The human copies and pastes.

The second way: "I don't understand this paragraph. Can you explain what the author means by 'eventual consistency'?" "I'm trying to decide between these two approaches. What are the tradeoffs I might be missing?" "Here's my rough draft. What's unclear or unconvincing?"

The difference is subtle but massive. In the first mode, you're outsourcing cognition. In the second mode, you're augmenting it.

I'm not being preachy here. I've done plenty of the first mode. Sometimes you just need to get something done and you don't care about learning. That's fine.

But when it becomes your default mode, something atrophies. You stop developing the mental muscles that come from struggling with hard problems. You become dependent on the tool in a way that makes you worse at thinking without it.

The engineers I most respect use AI differently. They use it to accelerate learning, not bypass it. They ask it questions. They have it explain concepts. They use it as a sparring partner for ideas. But when it comes to the actual thinking, the synthesis, the judgment calls, the creative leaps? They do that themselves.


The Reading Problem

I read a lot. Technical blogs, research papers, documentation, long-form journalism. And I kept running into the same friction: I'd hit a paragraph I didn't fully understand, and I had two choices.

Option one: open a new tab, search for an explanation, wade through SEO garbage, maybe find something useful, lose my place in the original article, forget what I was reading about. A five-minute detour that breaks my flow.

Option two: just keep reading and hope it makes sense later. Sometimes it does. Often it doesn't, and I finish the article with gaps in my understanding.

Neither option is great. What I wanted was something like having a knowledgeable friend sitting next to me who I could tap on the shoulder and ask "hey, what does this mean?" without it being a whole thing.

That's what ReadingBuddy is. Highlight text. Click. Get an explanation. Ask follow-up questions if you need them. Stay in the flow of what you're reading.

It's not about having AI summarize articles so you can skip reading them. It's about removing the friction that makes deep reading harder than it needs to be.
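
Under the hood the flow is about what you'd guess: a content script grabs the selection, asks for an explanation, and drops the answer into a small panel next to your reading. Here's a simplified sketch; the message type and panel styling are invented for illustration, and the real extension streams the answer over a port (more on that below).

    // Content script sketch: capture the highlighted text and show an explanation in place.
    document.addEventListener("mouseup", () => {
      const selection = window.getSelection()?.toString().trim();
      if (!selection) return;

      // "EXPLAIN" is an illustrative message type, not ReadingBuddy's actual protocol.
      chrome.runtime.sendMessage(
        { type: "EXPLAIN", text: selection },
        (reply: { explanation: string }) => showInlinePanel(reply.explanation)
      );
    });

    // Hypothetical helper: render the explanation in a floating panel, no new tab required.
    function showInlinePanel(text: string): void {
      const panel = document.createElement("div");
      panel.textContent = text;
      panel.style.cssText =
        "position:fixed; bottom:16px; right:16px; max-width:320px; " +
        "background:#fff; border:1px solid #ccc; padding:12px; z-index:2147483647;";
      document.body.appendChild(panel);
    }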


Why Not Just Use ChatGPT?

Fair question. You could copy text, paste it into ChatGPT, ask your question, copy the response back. People do this all the time.

Three reasons I wanted something different:

1. Context Switching Kills Focus

Every time you leave the page you're reading, you pay a cognitive tax. You have to remember where you were, what you were thinking about, what question you had. By the time you get the answer and switch back, you've lost momentum.

ReadingBuddy keeps you on the page. The explanation appears right there, next to the text you highlighted. Your eyes never leave what you're reading.

2. Privacy (Again)

With ReadingBuddy running on Ollama, nothing leaves your machine. Not the article you're reading, not your questions, not your learning history. With ChatGPT, everything goes to OpenAI's servers.

For a lot of reading, that matters. Technical docs at work, research in sensitive areas, anything you'd rather keep private.

3. Customization

I wanted short, focused explanations by default. Not the verbose, hedge-everything style that ChatGPT defaults to. ReadingBuddy gives you a 2-3 sentence explanation and then asks if you want more detail. Most of the time, you don't.

You can also choose your model. Smaller models for speed, larger ones for complex topics. Local models for privacy, cloud APIs when you need more capability. Your choice, not someone else's default.
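
In practice that amounts to a small settings object and a system prompt that insists on brevity. This is a sketch of the idea, not ReadingBuddy's actual configuration format.

    // Illustrative settings shape: local-first by default, cloud as an explicit opt-in.
    interface BuddySettings {
      provider: "ollama" | "cloud"; // "ollama" keeps everything on your machine
      model: string;                // a small model for speed, a bigger one for hard topics
      maxSentences: number;         // how long explanations are allowed to be
    }

    const defaults: BuddySettings = {
      provider: "ollama",
      model: "llama3", // example; use whichever model you have installed
      maxSentences: 3,
    };

    // Concise by default; the reader can always ask for more detail.
    function buildSystemPrompt(s: BuddySettings): string {
      return `Explain the highlighted passage in at most ${s.maxSentences} sentences, ` +
             `then ask whether the reader wants more detail.`;
    }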


The Technical Bit

If you're curious about how it works:

The Ollama integration was the trickiest part. Chrome extensions have CORS restrictions that make it annoying to talk to localhost services. ReadingBuddy handles this by setting OLLAMA_ORIGINS on the Ollama server side, which tells Ollama which browser origins are allowed to call its API.
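
In other words, you start Ollama with the extension's origin allowed, and the extension can then talk to localhost directly. The snippet below is a sketch of a connectivity check, not the extension's actual startup code; /api/tags just lists your installed models, which makes it a cheap way to confirm the connection works.

    // Sketch: check from the extension's service worker that Ollama is reachable.
    // Only works if the server was started with the extension's origin allowed,
    // e.g. OLLAMA_ORIGINS="chrome-extension://*" ollama serve
    async function ollamaReachable(): Promise<boolean> {
      try {
        const res = await fetch("http://localhost:11434/api/tags"); // lists installed models
        return res.ok;
      } catch {
        return false; // server not running, or this origin isn't allowed
      }
    }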

Streaming was important. Nobody wants to wait 10 seconds staring at a spinner. You want to see the response forming, word by word. This required port-based communication between the content script and service worker, with chunked messages as the LLM generates tokens.
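
Roughly, it looks like this. The names and message shapes are invented for illustration, but the moving pieces, a long-lived chrome.runtime port and Ollama's newline-delimited streaming output, are the real ones.

    // --- content script ---
    // Open a long-lived port; the service worker streams chunks back over it.
    const panel = document.createElement("div"); // bare-bones output area for the sketch
    document.body.appendChild(panel);

    const port = chrome.runtime.connect({ name: "explain-stream" });
    port.onMessage.addListener((msg: { chunk?: string; done?: boolean }) => {
      if (msg.chunk) panel.textContent += msg.chunk; // the answer appears word by word
    });
    port.postMessage({ text: "the highlighted passage" });

    // --- service worker ---
    // Read Ollama's streaming response and relay each chunk over the port as it arrives.
    chrome.runtime.onConnect.addListener((p) => {
      if (p.name !== "explain-stream") return;
      p.onMessage.addListener(async ({ text }: { text: string }) => {
        const res = await fetch("http://localhost:11434/api/generate", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: "llama3", prompt: `Explain: ${text}`, stream: true }),
        });
        const reader = res.body!.getReader();
        const decoder = new TextDecoder();
        let buffer = "";
        while (true) {
          const { value, done } = await reader.read();
          if (done) break;
          buffer += decoder.decode(value, { stream: true });
          const lines = buffer.split("\n");
          buffer = lines.pop() ?? ""; // keep any partial line for the next read
          // Ollama streams one JSON object per line, each with a partial "response" field.
          for (const line of lines.filter(Boolean)) {
            p.postMessage({ chunk: JSON.parse(line).response });
          }
        }
        p.postMessage({ done: true });
      });
    });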

The whole thing is open source: github.com/ShedBoxAI/reading_buddy


The Bigger Picture

I think we're at an inflection point with AI tools. The technology is good enough now that it can genuinely accelerate learning and thinking. But the default way most people use it, and the way most products encourage you to use it, optimizes for convenience over growth.

The question isn't "should I use AI?" It's "how should I use AI in a way that makes me better, not more dependent?"

For me, that means running models locally when I can, asking AI to explain things rather than think for me, and keeping the actual synthesis and judgment calls to myself.

ReadingBuddy is my attempt to build a tool that embodies these principles. It's small and focused. It does one thing. It respects your privacy. And it's designed to help you learn, not to learn for you.

Try it out. Break it. Tell me what sucks. Make it better.


ReadingBuddy is free and open source. Install it from the GitHub repo, or wait for the Chrome Web Store listing (coming soon).