TalkingCode

by Chidi Nweke
Technical Deep Dive

Exploring Agentic Techniques

A look at the patterns I’ve been learning and refining.

Hi, I'm Chidi 👋. I’ve been working with GenAI for the past few years, both professionally and as a hobby. I believe that the future of software development lies in systems that don't just search, but reason alongside us.

I built TalkingCode to share some of the state-of-the-art techniques I've been learning and refining in my daily work. My goal here is transparency: I want to show exactly how these loops work, from planning to execution, as a way for us to learn together.

Multi-step Reasoning

The heart of this project lies in the planning phase. Rather than a linear search, the agent uses multi-step reasoning to decompose your request. It treats the underlying codebase as a structured dataset, deciding which tools and "research" steps are necessary to build a complete answer.
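To make the idea concrete, here is a minimal sketch of that plan-then-execute loop. The step names (`semantic_search`, `git_log`, `synthesize`) and the keyword heuristic standing in for the LLM planner are my own illustrative assumptions, not TalkingCode's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Step:
    """One unit of "research": a tool name plus its input."""
    tool: str
    query: str
    result: Optional[str] = None

@dataclass
class Plan:
    """An ordered decomposition of the user's request."""
    request: str
    steps: list = field(default_factory=list)

def plan(request: str) -> Plan:
    # Stub planner: a real agent would ask the LLM to decompose
    # the request; a keyword check stands in for that here.
    steps = [Step("semantic_search", request)]
    if "history" in request.lower():
        steps.append(Step("git_log", request))
    steps.append(Step("synthesize", request))
    return Plan(request, steps)

def execute(p: Plan, tools: dict) -> str:
    # Run each step in order, feeding accumulated context forward
    # so later steps can reason over earlier findings.
    context: list = []
    for step in p.steps:
        step.result = tools[step.tool](step.query, context)
        context.append(step.result)
    return context[-1]
```

The key design point is that planning is separated from execution: the agent first decides *which* research steps are needed, then runs them, carrying context from step to step rather than doing a single linear search.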

Grounded Tool Calling

To ensure the agentic loops stay grounded, I've provided the system with direct access to the raw data of my repositories. This includes semantic search for context and git-level metadata to understand the history and evolution of the code.
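A sketch of how such grounding can be wired up: tools are registered by name, the model only ever emits a name plus arguments, and every call is resolved against real data (a vector index for semantic search, `git log` for history). The tool names and registry shape here are hypothetical; the semantic-search body is a placeholder.

```python
import subprocess
from typing import Callable, Dict

# Registry mapping tool names to implementations. The model sees
# only the names and argument schemas; execution always touches
# the actual repository data.
TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("semantic_search")
def semantic_search(query: str, top_k: int = 5) -> str:
    # Placeholder: a real implementation would embed `query` and
    # look it up in a vector index built from the repositories.
    return f"top-{top_k} snippets matching {query!r}"

@tool("git_metadata")
def git_metadata(path: str) -> str:
    # Grounding in history: shell out to git for the file's log.
    out = subprocess.run(
        ["git", "log", "--oneline", "--follow", "--", path],
        capture_output=True, text=True,
    )
    return out.stdout or f"no history found for {path!r}"

def dispatch(call: dict) -> str:
    """Resolve a model-emitted tool call against the registry."""
    return TOOLS[call["name"]](**call["arguments"])
```

Because every answer passes through `dispatch`, the loop can only report what the tools actually returned, which is what keeps it grounded.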

Dataset Precision

By treating GitHub as a high-fidelity dataset, we can use metadata filtering to provide the precision required for complex reasoning. The agent narrows its focus before retrieval, ensuring the reasoning engine works with the most relevant "facts" from my work.

Structural

Filtering by file types, paths, and repository scopes.

Contextual

Filtering by language features, symbol hints, and tags.

By narrowing the search space before retrieval, we achieve higher accuracy and lower latency, ensuring the LLM only sees the most relevant snippets of my code.
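The two filter families above can be sketched as a cheap metadata pass that runs before any similarity search. The `Chunk` fields and filter names below are illustrative assumptions about how the index might be shaped, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A unit of indexed code plus the metadata used for filtering."""
    repo: str
    path: str
    language: str
    tags: set = field(default_factory=set)

def matches(chunk, *, repos=None, extensions=None,
            languages=None, tags=None) -> bool:
    # Structural filters: repository scope, file type / path.
    if repos and chunk.repo not in repos:
        return False
    if extensions and not any(chunk.path.endswith(e) for e in extensions):
        return False
    # Contextual filters: language and tag hints.
    if languages and chunk.language not in languages:
        return False
    if tags and not tags <= chunk.tags:
        return False
    return True

def prefilter(index, **filters):
    """Narrow the search space; only survivors reach the
    (more expensive) embedding-based retriever."""
    return [c for c in index if matches(c, **filters)]
```

Since these checks are simple set and string comparisons, they prune the candidate pool almost for free, which is where the accuracy and latency gains come from.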

"Building systems that reason about systems."