Supercharging AI with structured data

Artificial intelligence is very good at producing answers. The problem is that, without structured data, it often doesn't know whether those answers are right.
Our colleagues at XBRL US, led by Michelle Savage, put this to the test in a way that regulators and market participants should pay attention to.
They asked Anthropic’s Claude to analyse Johnson & Johnson’s 2024 financials. Without XBRL, the model pulled together a serviceable summary: total revenues, segment splits, a broad geographic breakdown. Then came the structured version. Using XBRL, Claude produced detailed subsegments, product-level revenues, and regional splits. In other words, actual analysis, not just headline figures dressed up as insight.
Without XBRL, Claude leaned on third-party sites like Yahoo, Motley Fool, and even Wikipedia to fill in the gaps. With XBRL, it drew directly from the company’s official filings. If you’re running the analysis, which version would you rather base your decisions on?
This is the point regulators and market participants need to absorb. AI doesn’t remove the need for standards; it exposes it. A taxonomy isn’t a bureaucratic checklist; it’s the structure that tells machines what they are looking at and how the pieces fit together. Without it, AI is effectively operating without bearings. That might be fine for casual commentary, but it’s not a basis for reliable financial analysis.
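To see why tagging matters, here is a minimal sketch in Python. The fragment and figures are invented for illustration, and real XBRL filings are far richer (contexts, units, dimensions, the full us-gaap taxonomy), but the principle is the same: when every value carries a taxonomy concept, a machine can retrieve exactly the figure it needs rather than guessing from surrounding prose.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified fragment of tagged financial data.
# Concept names follow the us-gaap style; the numbers are made up.
fragment = """
<facts>
  <fact concept="us-gaap:Revenues" unit="USD">1000000</fact>
  <fact concept="us-gaap:NetIncomeLoss" unit="USD">150000</fact>
</facts>
"""

root = ET.fromstring(fragment)

# Each fact identifies itself, so lookup is unambiguous:
facts = {f.get("concept"): int(f.text) for f in root.findall("fact")}
print(facts["us-gaap:Revenues"])  # -> 1000000
```

Contrast that with scraping "revenue was about $1m" from a news article: the machine has to infer which revenue, for which period, in which unit. The tag removes the inference.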
There’s also a wider point. AI will, of course, continue to improve. But even the most advanced model is only as good as the information it is given. We wouldn’t trust a self-driving car on roads without signs or markings, and we shouldn’t trust financial AI without structured data.
Michelle’s experiment makes the point in practice: give AI structure, and you get precision. Leave it to scavenge unstructured data, and you get noise. Regulators who want reliable machine analysis should be drawing the same conclusion.
You can read Michelle Savage’s full article and see the results for yourself here.