Written by Nadia Kim (2026.02.27)
A new wave of tools is changing what it feels like to do research. In the past year or two, “vibe coding” has spilled out of software engineering and into academic workflows. The headline is simple: AI is no longer only responding to prompts. It is starting to take initiative, carry out multi-step tasks, and deliver finished artifacts. That shift is often described as the rise of agentic AI.
Most people are familiar with chat-style AI: you ask, it answers. Agentic AI goes a step further. You give it a goal, and it can execute a sequence of actions to reach that goal. It might search, read, extract, compare, draft, revise, and produce artifacts in one run, not just return a single response.
In plain terms, it is the difference between a tool that talks and a tool that tries to do.
The ecosystem is moving fast, but a few patterns are already clear. Tools like Codex, Claude Code, and agent-first environments such as Antigravity are increasingly used as build partners. Instead of asking for a snippet, you can ask for an entire workflow: set up a project, implement a feature, refactor, run tests, debug, and produce something shippable.
This “quality jump” is also visible in slides. AI-generated decks used to be treated as a joke. Now, workflows like Claude Code-based slide generation make it realistic to produce a decent deck quickly from a prompt plus a few source materials. Slides stop being the part that drains energy and slows everything down.
Where it gets most relevant for academics, though, is the personal research copilot idea. When you feed an agent your reading list, saved articles, notes, and even preferred writing style and recurring critiques, it can do higher-level synthesis.
For example, an agent fed this material can synthesize themes across a reading list, draft sections in your preferred style, or apply your recurring critiques to a new paper. It compresses the distance between reading and producing, which is why so many scholars are excited about it.
Another detail worth noticing beneath the hype is where this work happens.
A lot of people assume using agents means uploading papers and notes into some web AI product. That is not always necessary anymore. You can run agentic workflows directly on your own computer, or at least connect agents to local folders. You point the agent to PDFs, notes, a Zotero export, a project directory, or a writing folder. It reads, indexes, drafts, and writes outputs without you manually uploading each document into a web interface.
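To make the "point the agent to local folders" idea concrete, here is a minimal sketch of the first step such a workflow performs: walking a local notes directory, reading plain-text files, and assembling them into a single context for an agent to work over. The function names (`index_notes`, `build_context`) and the character budget are hypothetical illustrations, not the API of any particular agent tool, and real setups would also handle PDFs via an extraction library.

```python
from pathlib import Path

def index_notes(root: str, exts: tuple[str, ...] = (".md", ".txt")) -> dict[str, str]:
    """Walk a local notes folder and map each file's relative path to its text.

    Only plain-text formats are read here; PDFs would need a separate
    extraction step in a real workflow.
    """
    index: dict[str, str] = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix.lower() in exts:
            index[str(path.relative_to(root))] = path.read_text(encoding="utf-8")
    return index

def build_context(index: dict[str, str], char_budget: int = 4000) -> str:
    """Concatenate indexed notes into one prompt context, truncated to a budget.

    Each file becomes a titled section so the agent can cite which note
    a claim came from.
    """
    parts = [f"## {name}\n{text}" for name, text in index.items()]
    return "\n\n".join(parts)[:char_budget]
```

Everything stays on disk: the agent (or a script driving one) reads the folder directly, and no document is manually uploaded into a web interface.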
For research, that matters for two reasons: