I tried building this tool a couple of months ago; it definitely helps if we can visualize what the AI agents are building. How does this keep its diagrams accurate as the codebase evolves (like PyTorch refactors)? In particular, do you diff the static-analysis graph against the live repo on each commit, or do you have another way to flag drift and re-validate the LLM layer?
To keep the diagram up to date with commits, we use the git diff of the Python files. An agent first evaluates whether the change is big enough to trigger a full clean analysis.
If it isn't, we do the same thing component by component, recursively, updating only the components affected by the new change.
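Roughly, the flow looks like the sketch below. It's a minimal illustration, not our actual code: the Component tree, the file-count threshold in change_is_large, and the reanalyze() stub are hypothetical stand-ins for what the agent really decides; only the git diff call is literal.

    # Sketch of the incremental update: diff Python files between commits,
    # let an "agent" decide full vs. incremental, then refresh only the
    # affected components recursively.
    import subprocess
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        files: set[str]                              # Python files owned by this component
        children: list["Component"] = field(default_factory=list)

    def changed_python_files(base: str, head: str) -> set[str]:
        """Python files touched between two commits, via plain git diff."""
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}..{head}", "--", "*.py"],
            capture_output=True, text=True, check=True,
        )
        return {line for line in out.stdout.splitlines() if line}

    def change_is_large(diff_files: set[str]) -> bool:
        """Stand-in for the agent's 'big enough for a full clean analysis?'
        judgment; here it is just a file-count threshold."""
        return len(diff_files) > 50                  # hypothetical threshold

    def subtree_files(component: Component) -> set[str]:
        """All Python files under this component and its children."""
        files = set(component.files)
        for child in component.children:
            files |= subtree_files(child)
        return files

    def reanalyze(component: Component) -> None:
        print(f"re-analyzing {component.name}")      # placeholder for the LLM pass

    def update_component(component: Component, diff_files: set[str]) -> None:
        """Recursively refresh only the components whose files changed."""
        if not (subtree_files(component) & diff_files):
            return                                   # untouched subtree: keep cached diagram
        reanalyze(component)
        for child in component.children:
            update_component(child, diff_files)

    def refresh_diagram(root: Component, base: str, head: str) -> None:
        diff_files = changed_python_files(base, head)
        if change_is_large(diff_files):
            reanalyze(root)                          # full clean analysis
        else:
            update_component(root, diff_files)       # incremental, component by component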
But comparing control-flow graphs probably makes more sense for big refactor commits, since the raw diff might blow the context window. So far, though, we haven't seen that become an issue.
Curious to hear what your approach was when building the diagram representation!