Exploring liquidity risk disclosures with LLMs and XBRL
This week sees the second entry in our blog series “Using LLMs to Analyse Narrative Disclosures.” This time, XBRL International’s Revathy Ramanan dives into how large language models (LLMs), combined with XBRL tagging, can reveal both common patterns and outliers in how companies discuss liquidity risk.
Using a targeted prompt, Revathy applied topic modelling to liquidity disclosures, quickly surfacing recurring themes such as repayment schedules, reliance on credit lines, and cash flow strategies. The ability to isolate only the ‘liquidity risk’ sections using XBRL tags significantly streamlined the process, delivering faster, cleaner insights with minimal noise. Structured data played a vital role in making the analysis both efficient and repeatable.
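For readers curious how such a workflow might look in practice, here is a minimal sketch in Python: pull the text blocks tagged with a liquidity-risk concept out of an Inline XBRL filing, then ask an LLM to surface the recurring themes. The concept name, model choice, file name and prompt wording are illustrative assumptions, not the exact setup used in the post.

```python
# Sketch: isolate liquidity-risk text blocks from an Inline XBRL filing and
# ask an LLM to summarise recurring themes. Concept name, model and prompt
# are illustrative assumptions only.
from bs4 import BeautifulSoup
from openai import OpenAI

# Hypothetical concept name; the actual tag depends on the taxonomy in use.
LIQUIDITY_CONCEPT = "ifrs-full:DisclosureOfLiquidityRiskExplanatory"

def extract_liquidity_text(ixbrl_path: str) -> str:
    """Collect the text of every fact tagged with the liquidity-risk concept."""
    with open(ixbrl_path, encoding="utf-8") as f:
        soup = BeautifulSoup(f, "lxml")
    facts = soup.find_all(attrs={"name": LIQUIDITY_CONCEPT})
    return "\n\n".join(fact.get_text(" ", strip=True) for fact in facts)

def summarise_themes(disclosure: str) -> str:
    """Send only the tagged section to the model and ask for recurring themes."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "Identify the main liquidity-risk themes in this disclosure "
        "(e.g. repayment schedules, credit lines, cash flow strategy):\n\n"
        + disclosure
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    text = extract_liquidity_text("filing.xhtml")  # hypothetical file name
    print(summarise_themes(text))
```

Because the XBRL tags already delimit the liquidity-risk section, only the relevant text reaches the model, which is what keeps the analysis fast, clean and repeatable across filings.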
The post also takes a look at anomaly detection, flagging companies whose disclosures deviate from typical patterns. Some firms, for instance, lean heavily on qualitative narratives with little or no quantified risk data. Identifying such outliers is not only analytically useful but also potentially revealing for investors and regulators monitoring disclosure practices.
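One simple way to approximate that kind of outlier screen is to compare how much tagged numeric data a disclosure contains against the length of its narrative, and flag companies that fall well below their peers. The sketch below uses made-up figures and an arbitrary cut-off purely to illustrate the idea; it is not the method used in the post.

```python
# Sketch: flag disclosures that are almost entirely qualitative by comparing
# tagged numeric liquidity facts to narrative length. All figures, field names
# and thresholds are illustrative assumptions.
from statistics import median

# Hypothetical pre-extracted data per company, e.g. built from the same
# XBRL extraction step shown above.
disclosures = {
    "Company A": {"numeric_facts": 42, "narrative_words": 1800},
    "Company B": {"numeric_facts": 3,  "narrative_words": 2500},
    "Company C": {"numeric_facts": 28, "narrative_words": 1200},
}

def quant_density(d: dict) -> float:
    """Numeric facts per 1,000 words of narrative."""
    return 1000 * d["numeric_facts"] / max(d["narrative_words"], 1)

densities = {name: quant_density(d) for name, d in disclosures.items()}
peer_median = median(densities.values())

# Flag anyone below a quarter of the peer median (arbitrary cut-off).
outliers = [name for name, value in densities.items()
            if value < 0.25 * peer_median]
print("Mostly qualitative disclosures:", outliers)
```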
This second instalment highlights the power of combining high-quality, tagged data with LLMs to deepen understanding of corporate reporting. It shows how accessible AI tools can support faster, more nuanced analysis, and help identify both trends and gaps in financial communication.
Read this week’s blog post on our website.

