Yesterday we talked about AI agents needing babysitters. Today's follow-up: the real money isn't in replacing jobs with AI. It's in creating new bottlenecks that need humans to solve.
Box CEO Aaron Levie: AI will create jobs by creating new problems
Levie shared a counterintuitive take on AI's job impact, using legal services as an example. When AI agents make it easier for people to ask legal questions, you don't get fewer lawyers. You get more lawyers fielding those questions downstream. AI accelerates one part of the process, which just moves the human bottleneck somewhere else. He also pointed to AI accelerating business formation and patent applications as other examples.
Why it matters: If your company is planning layoffs because "AI will handle this," you're probably creating a different staffing problem six months from now. The work doesn't disappear; it shifts.
Replit CEO Amjad Masad wants GitHub to track security spending
Masad proposed that GitHub display how much compute has been spent securing each open-source package, shown alongside its star count. His example: the Linux kernel might show "$239M" in security investment. He's betting that AI will fully automate vulnerability discovery, making this kind of metric both possible and necessary for trust.
Why it matters: Your company's security team is about to care a lot more about which open-source packages you're using. "It's popular" won't be enough anymore when you can see exactly how much security analysis each project has received.
Peter Steinberger details four months of AI security work
Steinberger described building comprehensive security for what appears to be an AI coding tool, including sandboxes, allow-lists, and per-action permission prompts. He noted that "hundreds of security researchers" have pen-tested the system. This follows his December concerns about AI code execution risks.
Why it matters: The companies rushing AI coding assistants to market without this level of security work are about to look very irresponsible. Steinberger just set the bar for what proper AI tool security looks like.
Developer advocate Swyx highlights agent management problem
Swyx pointed out that while 2026 is "the year of subagents," the real challenge isn't building more agents but building systems that can manage and coordinate them. He mentioned advising on a new "Spaces concept" for agent composition, suggesting this is still an unsolved capabilities problem, not just an optimization one.
Why it matters: Every company deploying multiple AI agents is about to discover they need an "agent manager" role. Someone has to decide which agent handles what, and what happens when they disagree.
Google's Woodward teases a mystery link
Woodward shared a link with no context beyond "Get it here." Given his role at Google and yesterday's NEET exam prep announcement, this could be another Gemini feature launch.