Natural Opinions
Ai-Ai, Captain.
I was asked to write an AI Playbook for a dinner I was invited to tonight. I don’t think I know what an AI playbook is. But my best guess is that it’s something you can refer to when building AI functionality. Or maybe when you’re selling it? It’s unclear to me. But what is clear is that it’s unclear to others as well. So this quick (handwritten) post is to spray some Windex on what I think about AI, across all the facets an AI playbook may entail. And, much like an LLM’s generated responses, I may be hallucinating.
Functionality
AI functionality has been criminally overcooked and oversold. When you wince and wipe your eyes for a second, you see the same primitives that build great web applications: a browser window, a text box, and a submit button. The key difference is that a lot of these web elements now have a sparkle emoji (✨) next to them to indicate “this is powered by AI.”
In many ways, AI is not a new engineering architecture. Engineers still have to write HTTP handlers, push messages to queues, and respond to requests from users. The main difference is that they aren’t writing the “if/else” statements anymore. This is where “AI powered” comes in most of the time.
“AI powered” is effectively a magical box that can take text (or base64-encoded bits) and figure out a stream of if/else statements for you. Engineers pre-2023 would have had to write billions of conditional logic statements (by hand, no less) to achieve the same functionality that inference models achieve for us now. Trained models + situational awareness + context engineering is the ultimate if/else statement in an engineer’s toolbox.
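To make that trade concrete, here’s a rough sketch of the same feature built both ways. It assumes the OpenAI Python SDK; the function names, the model name, and the urgency labels are all stand-ins I made up for illustration.

```python
# A hand-rolled classifier vs. the "magical box." Assumes the OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def classify_urgency_by_hand(ticket: str) -> str:
    # The pre-2023 approach: hand-written conditionals, forever incomplete.
    if "outage" in ticket or "down" in ticket:
        return "critical"
    elif "slow" in ticket or "degraded" in ticket:
        return "high"
    else:
        return "low"

def classify_urgency_with_model(ticket: str) -> str:
    # The "AI powered" approach: the model is the if/else statement.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Classify this support ticket's "
             "urgency. Reply with exactly one word: critical, high, or low."},
            {"role": "user", "content": ticket},
        ],
    )
    return response.choices[0].message.content.strip().lower()
```

Same input, same output shape. The second version just handles phrasings nobody ever enumerated, at the cost of determinism.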
From an engineer’s point of view, if statements are being replaced with “agentic loops” that can call a chunk of code (or tool, in LLM parlance). These agentic loops have a single point of failure: the HTTP requests to OpenRouter, Anthropic, OpenAI, etc. But I digress.
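Here’s roughly what one of those loops looks like, again sketched against the OpenAI Python SDK. The get_weather tool is a stub I invented for illustration; note that every pass through the loop is one of those HTTP requests.

```python
# A minimal agentic loop: ask the model, run whatever tool it asks for,
# feed the result back, repeat until it answers in plain text.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    """Hypothetical 'chunk of code' the model can invoke."""
    return f"72F and sunny in {city}"  # stubbed; a real tool would call an API

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Denver?"}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=messages,
        tools=tools,
    )
    message = response.choices[0].message
    if not message.tool_calls:  # no tool requested: the loop is done
        print(message.content)
        break
    messages.append(message)  # keep the tool request in the transcript
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)  # real code would dispatch by call.function.name
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })
```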
The point I’m driving home is that “AI Powered,” while exciting, isn’t that much different than the applications of old. In many ways, AI-powered features are trading code complexity for accuracy complexity.
LLM interfaces matter more than model intelligence
AI is magical. Despite my tone in the above section, I do believe we’re entering a new era of software. From my perspective, AI will only take off if the user interfaces to access it enter their own new era as well.
For example, OpenAI (and ChatGPT) caught fire because someone had the idea to put a chat interface on their model. They created a simple HTTP API, a submit button, and a 2008-esque web page and BAM! Lightning struck. OpenAI was several versions (and years) deep into their model development, but the barrier to value was lowered to anyone who had an embarrassing AIM screen name in 2003. (Like me: Gsur6.) LLMs then became: type a few words, hit submit, and wait for the page to “load” the response... seems familiar. The only thing missing is the hissing and squawking of a 56k modem.
For AI to reach the heights of possibility the world is anticipating, I’m assuming we will need to reimagine the interfaces that allow humans to interact with AI in the first place. Will we even need keyboards in 10 years, for example? Or will tools like Wispr Flow become the predominant way to control our computers? Neuralink, a reimagined “user interface” to computers, might seem bananas now, but what if that is the way AI can achieve its full potential? Would users prefer that?
The interfaces humans use will have to change for the advancements of AI to provide more value to said humans. It’s unlikely that smarter models will do us much good until we improve the interfaces to them (again).
Selling AI
AI is exciting to build, market, and sell. But the appetite to buy AI doesn’t match yet. After three years, the osmosis of excitement between sellers and buyers has not happened. Skepticism about AI is rampant, and not only among software engineers. Even my girlfriend, who works in PR and Communications, has her own doubts about AI.
There’s a certain dissonance surrounding AI to me. If you drive up the 101 toward San Francisco from the airport, you’d be right to assume that AI is the only thing that matters right now. Yet your Lyft driver is still manually controlling their car, burning the gasoline they pumped themselves, pocketing the cash they received as a tip. To me, the biggest thing that has changed due to AI (up to this point) is the shift of marketing energy toward it, rather than the sheer value I receive from it. Even Dell has admitted that AI computers aren’t selling on the strength of AI itself.
When it comes to selling AI, I’ve seen one selling strategy that works best: building pragmatic AI systems. Customers, by and large, are inundated with AI in every corner of their lives right now. If they are seeking out AI features, they will tell you. I promise. It’s more important to sell the value of your platform like the sales gods intended, not the AI that maybe(?) improves the core functionality a prospect is seeking in the first place.
Even the AI model providers themselves follow this principle. Simple APIs. Simple user interfaces.
What AI features you can build now
Reading comprehension scores are high for LLMs, and you are better off leveraging AI to read than to write in the immediate term. What that means in the real world is that using AI to transform information is better than using AI to create information. You need to find creative solutions when applying this principle to feature development. But one that I’m actively exploring is using AI to migrate between systems.
For example, an agentic system that can read an API response from, say, PagerDuty can quite easily transform that information into, oh I don’t know... FireHydrant. AI might be the best foundation for a robust ETL system you could build today.
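To sketch what that migration looks like (the incident fields and target schema below are stand-ins I invented, not PagerDuty’s or FireHydrant’s actual APIs; the OpenAI Python SDK is assumed):

```python
# LLM-as-ETL: read a source system's JSON, emit the target system's JSON.
import json
from openai import OpenAI

client = OpenAI()

source_incident = {  # illustrative shape, not PagerDuty's real schema
    "incident_number": 4211,
    "title": "Checkout latency above 5s",
    "urgency": "high",
    "created_at": "2025-01-07T18:02:00Z",
}

TARGET_SCHEMA = """{
  "name": string,
  "severity": "SEV1" | "SEV2" | "SEV3",
  "started_at": ISO 8601 timestamp
}"""  # illustrative shape, not FireHydrant's real schema

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    response_format={"type": "json_object"},  # ask for JSON back
    messages=[
        {"role": "system",
         "content": "Transform the incident into this JSON schema and "
                    "return only JSON: " + TARGET_SCHEMA},
        {"role": "user", "content": json.dumps(source_incident)},
    ],
)
migrated = json.loads(response.choices[0].message.content)
print(migrated)  # e.g. {"name": "Checkout latency above 5s", "severity": "SEV2", ...}
```

In practice you’d validate the output against the target schema before writing it anywhere, because the model is doing the if/else for you and will occasionally get it wrong.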
By using AI to read and transform information, you’re automating a known process that humans would have to go through anyway. That proves out the value of an agentic system with far less “selling” than an innovative feature that may not work in the first place.
Again: Selling practical AI features will triumph over innovative but misunderstood functionality. Bring people along with AI. Preferably not by force.
Where Is This All Going?
I don’t know. Neither do you. My best guess is that AI will find a way to improve most aspects of life by an unknown percentage, with some areas more than others. That’s why there’s so much money being poured into AI startups: even a marginal improvement of humanity will have an enormous return.
Said another way, I wouldn’t mind a 10% better life for everyone on Earth if it means a few years of annoying billboards, AI in my refrigerator for some fucking reason, or an AI music “artist.” I don’t need to open my wallet for those things. I’m confident they’ll die off on their own. Me not caring is the best assist I can give to ensure that. Ironic.
I believe AI will unlock innovations in medicine, transportation, food production, energy, and law (to name a few) far faster than it ever improves our sales funnels. I want a better sales funnel. But I also want food production to be stable for centuries to come. AI will be a part of the solution for both. Hopefully more for the food problem, though.
AI will change the world. Slower than businesses want, and only at the pace that we, the buyers, tolerate AI in our lives. Swinging between impatient overestimation and practical underestimation is what creates lasting progress.
Now, to get to my dinner party. I’m late. If only AI could’ve called me a car.

