One of the more important, but seldom seen, bodies in the financial reporting world is the XBRL Data Quality Committee: a group supported by XBRL US to promote “validation rules” that help reduce the rate of errors in XBRL filings. (Disclosure: our own Pranav Ghai, CEO of Calcbench, is a member of the committee.)
We recently chatted with Susan Yount, director of reporting practices at Workiva and staff to the Data Quality Committee, to hear her thoughts on what the committee is trying to accomplish and the specific projects it’s been tackling lately. (Hint: the committee has just published a series of proposed validation rules. You can review them at https://xbrl.us/data-quality/public-review and public comment is welcome.)
An excerpted interview is below.
So, the Data Quality Committee—tell us what its mission is.

The committee is concerned with improving the usability of XBRL data. What we’ve heard from investors is that there are quality issues in the data, and we believe those quality issues can be fixed, through automated validation rules and through guidance. Our mission is to improve the data by putting out these rules and new guidance.
How would you position the Data Quality Committee relative to FASB and the SEC?
We’re complementary to them. There’s plenty of work to be done, and while their role is largely regulatory, we’ve shown now that we can influence filer behavior to enhance quality. It’s a supporting role.
Tell us more about the first set of validation rules the committee published last November. Errors fell by 64 percent after that.
Yeah, that was pretty exciting. We put out seven rules that were effective Jan. 1, and our experience at Workiva has been that it takes filers two quarters really to digest and implement a set of validation rules.
The largest impact from the rules came from negative value errors—where you see people entering, for example, their interest expense as a negative value. That’s pretty hard to interpret. If you were to ask me, “What was your interest expense for the period?” I wouldn’t give you a negative number. That’s how we want filers to be looking at this: what is the data actually representing? In this case it’s an interest expense and that can’t be a negative value. We also often see negative dividends.
These are just structural errors in a filing sometimes. Some of it has to do with the way XBRL works, and some of it has to do with a filer’s approach. We really just want to raise awareness around these issues—hey, you’re showing a negative interest expense or a negative dividend. It’s pretty clear that filers understood that. Filers pay attention to automated rules.
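The negative-value checks described above can be sketched as a simple automated rule. This is an illustrative sketch only, not the committee's actual rule definitions; the element names and data shape here are assumptions for the example.

```python
# Hypothetical negative-value validation rule, in the spirit of the checks
# described above. Element names and structure are illustrative only.

# Elements that represent amounts that should never be reported as negative
NON_NEGATIVE_ELEMENTS = {
    "InterestExpense",
    "PaymentsOfDividends",
}

def check_negative_values(facts):
    """Return error messages for facts that tag a non-negative
    element with a negative value."""
    errors = []
    for element, value in facts:
        if element in NON_NEGATIVE_ELEMENTS and value < 0:
            errors.append(
                f"{element} reported as {value}; expected a non-negative value"
            )
    return errors

# An interest expense tagged as a negative number gets flagged;
# an ordinary revenue figure does not.
facts = [("InterestExpense", -5_000_000), ("Revenues", 12_000_000)]
for message in check_negative_values(facts):
    print(message)
```

The point of a rule like this is that it runs mechanically over every fact in a filing, so a preparer sees the warning before submission rather than an investor seeing an uninterpretable negative expense afterward.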
So negative values were the biggest share of improvements from those validation rules. What other fixes were included?
We started putting in validations to check the quality of the document and entity information—the “header” information, I’m this company filing this document for this period; basic demographic information about the filing. We’ve seen some errors that I think will be easy to fix once people understand that they’re errors. It’s not the way that accountants are trained to think, so providing them tools to point out what isn’t right will help.
One rule, for example, was that when you put together one of these filings, you say what time period it covers. Now, filers roll these documents forward from period to period, so every once in a while someone will forget, and use the prior period dates. That makes the data really hard to interpret. So one rule catches that.
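A check for that stale roll-forward date might look something like the sketch below. The field name and interface are hypothetical, chosen just to show the shape of the comparison.

```python
# Minimal sketch of a stale-period check: flag a filing whose reported
# document period end date does not match the period it should cover.
# Field names here are hypothetical, not the committee's actual rule.
from datetime import date

def check_document_period(doc_period_end, expected_period_end):
    """Return an error if the filing's period end date was rolled
    forward from a prior period without being updated."""
    if doc_period_end != expected_period_end:
        return [
            f"DocumentPeriodEndDate is {doc_period_end}, "
            f"expected {expected_period_end}"
        ]
    return []

# A Q2 filing still carrying last quarter's end date gets flagged.
for message in check_document_period(date(2016, 3, 31), date(2016, 6, 30)):
    print(message)
```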
What’s next for a second batch of rules? How would you like XBRL filers, or investors consuming the data, to provide feedback or input to the committee?
We just put these rules out for a public comment period, and it’s really important that we get feedback from people who will actually be using the data, about whether these rules will work for them… We’re looking at getting them out as fast as we can get them through the system. We’re not working on a calendar system per se, but we are sensitive to filing periods.
So from investors or non-filers, we really need to hear what you think about these rules. We’re also about to put out some guidance on actual element selection. We know there are a lot of unnecessary extensions. Now, we can identify those, but then what do people use instead? So we really need to help people pick elements that promote comparability, and that’s what the guidance will cover. It should result in a pretty significant improvement in the data.