Embrace the Suck
On-call duty doesn’t have to be a burden. It can be quite the career accelerator in software engineering. Here are four reasons why y’all might want to embrace those midnight alerts.
1. Deepen Your Understanding of the Entire System
When you’re on call, you’re exposed to parts of the system you might not interact with during your daily routine. This exposure strengthens your ability to troubleshoot effectively and contributes to a more robust system design.
2. Enhance Cross-Functional Collaboration
Incidents often require coordination with a lot of different stakeholders: product managers, customer support, DevOps, etc. All of this interaction sharpens your communication skills and fosters a collaborative environment, which is key for swift incident resolution.
3. Cultivate a Mindset for Resilient Coding
Experiencing real-life fires reinforces the importance of writing resilient, fault-tolerant code. It instills a proactive approach to anticipating potential failures and drives home the need to design systems that can gracefully handle unexpected issues.
4. Build Strong Partnerships with SREs
Collaborating with site reliability engineers during incidents creates great partnerships. SREs bring specialized expertise in system reliability, monitoring, and performance optimization. By working alongside these folks during incidents, you’ll gain insights into observability best practices and develop a shared understanding of your company’s reliability needs. These partnerships often lead to better architectural decisions long before any incident occurs.
So, being on call isn’t just about responding to late-night alerts; it’s an experience that fosters growth and resilience. It’s an opportunity to grow as an engineer and contribute to building reliable systems at scale.

My parents’ house was about a five-hour drive away from where I went to college. And every semester, my mom would make that drive, round-trip in one day, roughly 10 hours of driving, just so she could make sure I got to school or home safely. Thanks, Mom. Happy Mother’s Day.
My LLM Ate My Homework
A college education is meant to shape how we think, not just what we know. At least, that’s what we’re told. Writing essays and solving math problems aren’t meant to be busywork. They’re exercises in critical thinking, the mental equivalent of weightlifting.
This is why AI tools present such a problem in higher education. A NYMag feature showed that many students now treat AI not as a learning aid but as a “get out of jail free” card, allowing them to cheat on just about any assignment. In the short term, this seems efficient, maybe. But in the long term, it’s like hiring someone to do your pushups and expecting to get stronger. How will tomorrow’s college grads think critically if they didn’t master it while at university?
But there’s a greater issue here that predates AI cheating by several decades: college isn’t about learning anymore; it’s a high-stakes transaction. Ethan Mollick noted that the modern university experience has become transactional. Students are acutely aware that good grades can unlock internships, jobs, scholarships, etc. The system pressures students to produce results, not necessarily to understand the material.
If ChatGPT can generate a passable term paper in 10 minutes, and that paper gets the same grade as one that took 20 hours, any rational student weighs the costs and benefits and decides they’re better off taking the risk with ChatGPT doing the heavy lifting. When incentives reward output over process, the result isn’t surprising.
My take is that we can’t ban AI tools in education. That ship has sailed. Rather than resist this shift, educators, students, and parents should adapt. That adaptation will require changes in how educators assess learning. It may involve more oral exams, in-class writing assignments, or coursework that explicitly asks students to critique or build upon AI-generated work. It also means educators must teach students how to use AI responsibly: as a thought partner rather than as a ghostwriter.
For parents, the challenge is to reinforce the value of learning over simply achieving. I have tried to practice this myself with my own kids. I know they’re going to use AI tools to help them finish their schoolwork. I just ask that they use it as a learning tool and not merely as a crutch to finish the work faster. Time will tell if this was good advice.
Sources:
www.oneusefulthing.org/p/post-ap…

OpenAI buys Windsurf.
OpenAI currently has Codex, which is a command-line tool. Windsurf is a full-blown IDE.
https://finance.yahoo.com/news/openai-reaches-agreement-buy-startup-000054157.html
NotebookLM App
Just announced: Google’s NotebookLM app is coming “May 20th on iOS and Android.” It’s being touted as one of their best AI tools yet, offering users tons of flexibility.
For those unfamiliar with NotebookLM, it’s essentially Google’s answer to AI-assisted research and note-taking. The tool has previously existed in a more limited form (only on the web), but this standalone app release shows Google’s confidence in its capabilities. I particularly enjoy using NotebookLM because it understands your documents, allowing you to have conversations about your content instead of just searching through it. There’s a podcast feature as well, although I haven’t tried that bit yet.
My biggest use case for it so far has been with random user manuals for home appliances, tools, and other household doodads. We have a cabinet where we keep these, but have not once gone back to reference them. With NotebookLM, I can scan the manuals in as PDFs and then ask questions in chatbot form later, which decreases friction tremendously.
For those of us who’ve been waiting for the app version of NotebookLM, the wait appears to be nearly over.
Sources: www.tomsguide.com/ai/google… x.com/OfficialL…

The OpenAI Mafia
Recently, I was thinking about all the new AI startups that are in the news pretty much constantly. Many are founded by OpenAI alumni. It turns out OpenAI isn’t just a leading AI company. It’s also become Silicon Valley’s newest “mafia,” much like the OG PayPal network. Over the past few years, about 70 alumni have ventured out to launch 30+ startups, covering AI safety, search, robotics, edtech, climate tech, enterprise tools… You name it.
Some examples:
- Anthropic (natch) (Dario and Daniela Amodei and John Schulman) - tackling next‑gen safety challenges.
- Safe Superintelligence (Ilya Sutskever) - also tackling safety challenges.
- Perplexity (Aravind Srinivas) - AI‑powered search.
- Thinking Machines Lab (Mira Murati) - “customizable” AI.
- Cleanlab (Anish Athalye), Symbiote AI (Taehoon Kim) and Aidence (Tim Salimans) - basically a grab bag of applications for AI, ranging from data‑quality tooling to real‑time 3D avatars to medical imaging.
- Several more, such as Covariant, Prosper Robotics, Living Carbon, Daedalus, Eureka Labs, Pilot, Cresta, and Adept AI Labs.
Essentially, OpenAI’s blend of mission‑driven R&D, a collaborative culture, and early exposure to bleeding-edge models acts as a de facto incubator. No formal accelerator needed.
The “OpenAI Mafia” is a real thing, and its ripple effects are definitely being felt. Watching these founders go from colleagues to competitors is pretty exciting. And frankly tough to keep up with. But it’s cool to watch it all unfold.
Sources: techcrunch.com/2025/04/2… analyticsindiamag.com/global-te…

New Pixar movie on the way, titled “Mysterious and Important”: four friends go on an adventure.

More on the Apple v Epic Ruling
A lot has already been said about this.
Here’s Gruber.
Here’s 9to5mac.
Here’s Primary Technology’s podcast where Stephen and Jason spend the first 35 minutes on it.
As for Apple itself, we’re already seeing immediate effects: the company is changing its App Store rules to allow outside purchases, something Judge Gonzalez Rogers emphasized in her ruling the company was supposed to have done in the first place.
But I want to talk about something a little different. After digesting this news for a day or two and listening to some podcasts about it, my main takeaway was the sheer scale of the reaction. Most folks who cover Apple love its products. There’s a genuine enjoyment in showing others how easy and fun it is to touch and feel and use iPhones, Macs, iPads, etc. The software is intuitive, beautifully designed, almost eye candy. Additionally, there’s some joy in contrasting it with other companies’ products, where the same design and joyfulness is decidedly absent.
However, now these same people can no longer ignore the fact that Apple has behaved in a manner that is clearly anti-user, not to mention anti-developer. One commenter on 9to5mac summed it up pretty well:
“Apple makes some of my favorite technology products, while at the same time engaging in some of the most anti-competitive, anti-consumer practices we see today. I celebrate this ruling. I believe it’s possible both to love Apple products and want the company to do better.” - Ryan W.
When other Big Tech companies are naughty, that’s big news too. And it’s covered just as thoroughly. But the reaction is a little more muted, to my ears at least. Nowadays, in 2025, we almost expect Google, Meta, Amazon, et al. to misbehave from time to time. Their revenue model doesn’t align perfectly with user experience, and so it’s no surprise that Google wants all of our data or Meta wants to show us more targeted ads. And it’s not really much of a shock when they’re accused of lying and cheating the system. “Oh, Zuck weaseled his way through that Senate hearing? Cool. What’s for lunch?”
But when Apple ignores a judge’s ruling and an Apple executive is allegedly caught lying under oath, the subtext of the reaction is different: Apple is better than this. They should have known better. Indeed, Phil Schiller appealed to the other execs' better natures back when the first ruling occurred in 2021 and basically said (paraphrasing): “guys, let’s just do what they’re asking and move on”.
It’s an interesting contrast. Apple is so huge nowadays that it really shouldn’t be a surprise that they’re engaging in anti-competitive behavior. Just like all the others, they can afford an army of analysts and lawyers to figure out ways to game the system. What’s different though, is when the Googles and Metas of the world do it, we seem to shrug our shoulders. But when Apple does it, we’re genuinely hurt because our expectations of the company that was founded by Steve Jobs are much higher.