Sam and Satya

Two interesting news morsels coming out of Redmond.

  1. Microsoft is renegotiating its partnership with OpenAI to clear a path for an OpenAI IPO.

  2. Microsoft just announced layoffs across LinkedIn(!) and GitHub, with a focus on reducing management layers and streamlining operations. It amounts to about 3% of the workforce.

So, are these two news stories related in any way?

Microsoft has invested north of $13 billion in OpenAI. With OpenAI exploring an IPO, Microsoft likely wants to ensure its continued access to bleeding-edge AI technology.

Given that Microsoft reported record revenue ($62 billion) and profit in its latest quarter, the layoffs appear targeted; there clearly isn’t financial distress here.

Freeing up headcount and budget allows Microsoft to redirect resources to AI and other growth initiatives.

Revisiting My Vibe Coding Prediction

About a year ago, I made a prediction: that the rise of LLMs like ChatGPT would lead to fewer programming languages over time. If AI agents can translate high-level instructions into code, then language complexity becomes an implementation detail rather than a developer concern.

Fast forward to today, with tools like Cursor and Windsurf, and we’re closer to that reality. For example, I can ask Cursor to “build a weather app in React with Redux” and get a runnable scaffold in a matter of seconds, without touching the docs.
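To make that concrete, here’s a rough sketch of the kind of Redux Toolkit slice such a scaffold might contain. Everything in it is illustrative: the slice shape, the placeholder weather endpoint, and the store wiring are my assumptions about what a tool like Cursor could generate, not its actual output.

```ts
// weatherSlice.ts - a hypothetical slice a "build a weather app" prompt might scaffold.
import { configureStore, createAsyncThunk, createSlice } from "@reduxjs/toolkit";
import type { PayloadAction } from "@reduxjs/toolkit";

// Shape of the weather data kept in the store.
interface WeatherState {
  city: string;
  temperatureC: number | null;
  status: "idle" | "loading" | "succeeded" | "failed";
}

const initialState: WeatherState = { city: "", temperatureC: null, status: "idle" };

// Async thunk that fetches current weather for a city.
// The endpoint is a placeholder; a real scaffold would point at an actual weather API.
export const fetchWeather = createAsyncThunk("weather/fetch", async (city: string) => {
  const res = await fetch(`https://example.com/api/weather?city=${encodeURIComponent(city)}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as { temperatureC: number };
});

const weatherSlice = createSlice({
  name: "weather",
  initialState,
  reducers: {
    // Plain reducer for updating the selected city.
    setCity(state, action: PayloadAction<string>) {
      state.city = action.payload;
    },
  },
  extraReducers: (builder) => {
    builder
      .addCase(fetchWeather.pending, (state) => { state.status = "loading"; })
      .addCase(fetchWeather.fulfilled, (state, action) => {
        state.status = "succeeded";
        state.temperatureC = action.payload.temperatureC;
      })
      .addCase(fetchWeather.rejected, (state) => { state.status = "failed"; });
  },
});

export const { setCity } = weatherSlice.actions;

// Store wiring so the slice is usable on its own.
export const store = configureStore({ reducer: { weather: weatherSlice.reducer } });
```

The point isn’t the specifics; it’s that none of this boilerplate had to be typed, or even fully understood, by hand before the app runs.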

So far, we haven’t seen any significant drop in the number of languages. But I think the original intuition holds: we’re moving toward a world where the choice of language matters less because the AI abstracts it away. Hence: vibe coding.

Embrace the Suck

On-call duty doesn’t have to be a burden. It can be quite the career accelerator in software engineering. Here are 4 reasons why y’all might want to embrace those midnight alerts.

1. Deepen Your Understanding of the Entire System

When you’re on call, you’re exposed to parts of the system you might not interact with during your daily routine. This exposure strengthens your ability to troubleshoot effectively and contributes to a more robust system design.

2. Enhance Cross-Functional Collaboration

Incidents often require coordination with a lot of different stakeholders: product managers, customer support, DevOps, etc. All of this interaction sharpens your communication skills and fosters a collaborative environment, which is key for swift incident resolution.

3. Cultivate a Mindset for Resilient Coding

Experiencing real-life fires reinforces the importance of writing resilient, fault-tolerant code. It instills a proactive approach to anticipating potential failures and drives home the need to design systems that can gracefully handle unexpected issues (there’s a small sketch of this mindset after the list).

4. Build Strong Partnerships with SREs

Collaborating with site reliability engineers during incidents creates great partnerships. SREs bring specialized expertise in system reliability, monitoring, and performance optimization. By working alongside these folks during incidents, you’ll gain insights into observability best practices and develop a shared understanding of your company’s reliability needs. These partnerships often lead to better architectural decisions long before any incident occurs.
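To make point 3 a bit more concrete, here’s a minimal sketch of that mindset in TypeScript: wrapping an outbound call with a timeout plus a few retries with exponential backoff. The function names, attempt limits, and backoff schedule are illustrative choices of mine, not anything prescribed by a particular incident playbook.

```ts
// A small sketch of defensive calling: timeout plus retries with exponential backoff.
// Works in Node 18+ or the browser, where fetch and AbortController are globals.

async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

async function fetchWithRetry(url: string, attempts = 3, timeoutMs = 2000): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      const res = await fetchWithTimeout(url, timeoutMs);
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // network error or timeout
    }
    // Exponential backoff between attempts: 250ms, 500ms, 1000ms, ...
    if (attempt < attempts - 1) {
      await new Promise((resolve) => setTimeout(resolve, 250 * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Nothing fancy, but once you’ve been paged at 2 a.m. for a dependency that hung forever, this kind of wrapper stops feeling optional.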

So, being on call isn’t just about responding to late-night alerts; it’s an experience that fosters growth and resilience, and an opportunity to grow as an engineer while contributing to reliable systems at scale.

My parents’ house was about a 5-hour drive away from where I went to college. And every semester, my mom would make that drive, round-trip in one day, so roughly 10 hours of driving. Just so she could make sure I got to school or home safely. Thanks, Mom. Happy Mother’s Day.

My LLM Ate My Homework

A college education is meant to shape how we think, not just what we know. At least, that’s what we’re told. Writing essays and solving math problems isn’t meant to be busywork. Those assignments are exercises in critical thinking, the mental equivalent of weightlifting.

This is why AI tools present such a problem in higher education. A NYMag feature showed that many students now treat AI not as a learning aid but as a “get out of jail free” card, using it to cheat on just about any assignment. Short term, that might seem efficient. But long term, it’s like hiring someone to do your pushups and expecting to get stronger. How will tomorrow’s college grads think critically if they never mastered it at university?

But there’s a greater issue here that predates AI cheating by several decades: college isn’t about learning anymore; it’s a high-stakes credentialing exercise. Ethan Mollick noted that the modern university experience has become transactional. Students are acutely aware that good grades can unlock internships, jobs, scholarships, etc. The system pressures students to produce results, not necessarily to understand the material.

If ChatGPT can generate a passable term paper in 10 minutes, and that paper gets the same grade as one that took 20 hours, any rational student weighs the costs and benefits and decides they’re better off taking the risk with ChatGPT doing the heavy lifting. When incentives reward output over process, the result isn’t surprising.

My take is that we can’t ban AI tools in education. That ship has sailed. Rather than resist this shift, educators, students, and parents should adapt. That adaptation will require changes in how educators assess learning. It may involve more oral exams, in-class writing assignments, or coursework that explicitly asks students to critique or build upon AI-generated work. It also means educators must teach students how to use AI responsibly: as a thought partner rather than as a ghostwriter.

For parents, the challenge is to reinforce the value of learning over simply achieving. I have tried to practice this myself with my own kids. I know they’re going to use AI tools to help them finish their schoolwork. I just ask that they use it as a learning tool and not merely as a crutch to finish the work faster. Time will tell if this was good advice.

Sources:

nymag.com/intellige…

www.oneusefulthing.org/p/post-ap…

OpenAI Buys Windsurf

OpenAI currently has Codex, which is a command-line tool; Windsurf is a full-blown IDE.

https://finance.yahoo.com/news/openai-reaches-agreement-buy-startup-000054157.html

NotebookLM App

Just announced: Google’s NotebookLM app is coming “May 20th on iOS and Android.” It’s being touted as one of their best AI tools yet, offering users tons of flexibility.

For those unfamiliar with NotebookLM, it’s essentially Google’s answer to AI-assisted research and note-taking. The tool previously existed in a more limited form (only on the web), but this standalone app release shows Google’s confidence in its capabilities. I particularly enjoy using NotebookLM because it understands your documents, allowing you to have conversations about your content instead of just searching through it. There’s a podcast feature as well, although I haven’t tried that bit yet.

My biggest use case for it so far has been with random user manuals for home appliances, tools, and other household doodads. We have a cabinet where we keep these, but have not once gone back to reference them. With NotebookLM, I can scan the manuals in as PDFs and then ask questions in chatbot form later, which decreases friction tremendously.

For those of us who’ve been waiting for the app version of NotebookLM, the wait appears to be nearly over.

Sources: www.tomsguide.com/ai/google… x.com/OfficialL…

The OpenAI Mafia

Recently, I was thinking about all the new AI startups that are in the news pretty much constantly. Many are founded by OpenAI alumni. It turns out OpenAI isn’t just a leading AI company. It’s also become Silicon Valley’s newest “mafia,” much like the OG PayPal network. Over the past few years, about 70 alumni have ventured out to launch 30+ startups, covering AI safety, search, robotics, edtech, climate tech, enterprise tools… You name it.

Some examples:

  • Anthropic (natch; Dario and Daniela Amodei, John Schulman) - tackling next‑gen safety challenges.
  • Safe Superintelligence (Ilya Sutskever) - also tackling safety challenges.
  • Perplexity (Aravind Srinivas) - AI‑powered search.
  • Thinking Machines Lab (Mira Murati) - “customizable” AI.
  • Cleanlab (Anish Athalye), Symbiote AI (Taehoon Kim), and Aidence (Tim Salimans) - basically a grab bag of AI applications: from data‑quality tooling to real‑time 3D avatars to medical imaging.
  • Several more, such as Covariant, Prosper Robotics, Living Carbon, Daedalus, Eureka Labs, Pilot, Cresta, and Adept AI Labs.

Essentially, OpenAI’s blend of mission‑driven R&D, a collaborative culture, and early exposure to bleeding‑edge models is acting as a de facto incubator. No formal accelerator needed.

The “OpenAI Mafia” is a real thing, and its ripple effects are definitely being felt. Watching these founders go from colleagues to competitors is pretty exciting, and frankly tough to keep up with, but it’s cool to watch it all unfold.

Sources: techcrunch.com/2025/04/2… analyticsindiamag.com/global-te…