You are currently browsing the monthly archive for November 2010.

The US Securities and Exchange Commission’s (SEC) “Interactive Data” initiative continues to progress. Thus far, some 1,500 corporations have filed their financial information using XBRL tags to facilitate review and analysis, and almost 400 of them have done detailed tagging of their footnotes. By June 2011 all public companies will have to provide an XBRL-tagged, interactive version of their financial statements. As I’ve noted in the past, I think companies should find ways to automate the XBRL tagging process to make it as efficient as possible and make this part of a close-to-report process automation effort that can lower the cost of compliance and give companies more time to review the substance (not just the details) of their filings.

The SEC’s Division of Risk, Strategy and Financial Innovation recently released a report summarizing its analysis of XBRL-tagged filings submitted in the June to August 2010 period. Its purpose was to identify common issues that have come up as companies implement their compliance efforts. Corporations have a two-year grace period from their initial filing obligation in which they are not liable for mistakes made in their submissions, so making common errors known is a benefit to all parties both now and in the future.

So how are things going? As one might expect, there are some teething pains as corporations feel their way through a process that can be quite complex when it comes to the details. And even though the issues center on small details, it’s precisely those small details that are important. In that respect, I believe that one of the main reasons to automate the tagging process is to limit the possibility of making mistakes. Financial statements and the types of information in the footnotes that require tagging change very slowly. Once an automated tagging system is configured, the same (correct) tagging process happens as a matter of course. The broader the scope of tagging automation, the more you limit the chance of mistakes happening. Moreover, making the tagging aspect an integral part of automating the close-to-report process allows more time to review the statements and uncover any errors that might have crept in.

The SEC invested in upgrading the initial 2005 US-GAAP (generally accepted accounting principles) taxonomy to ensure that it contained enough detail to cover the needs of just about any company, regardless of industry. The purpose was to ensure a high degree of comparability between companies in similar industries and to simplify the tagging process by obviating the need for companies to develop custom tags to cover items that are specific to a particular type of business. The 2005 US-GAAP taxonomy handled industrial companies reasonably well, but its definitions for services, some financial services and even some basic materials industries were not at all robust. (The IFRS taxonomy, incidentally, is in need of beefing up.) One of the areas noted by the SEC where companies have commonly erred is creating a custom tag where an existing element already exists. The SEC thinks this may reflect an insufficient search effort; I wonder if it reflects time pressures or a lack of understanding of what’s required. The SEC also notes that some elements can appear in multiple financial statements. For instance, some components of interest expense may be found in the “disclosures” section, not the income statement.
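To illustrate the point, here is a hypothetical instance fragment (the `abc:` namespace prefix, context and unit identifiers, and the dollar amount are invented for illustration). A filer creates an unnecessary custom extension element even though the standard US-GAAP taxonomy already has an element for the concept:

```xml
<!-- Hypothetical: an unnecessary custom extension element for a
     concept the standard taxonomy already covers. -->
<abc:InterestExpenseOnBorrowings contextRef="FY2010" unitRef="usd"
    decimals="-3">1250000</abc:InterestExpenseOnBorrowings>

<!-- Preferred: reuse the existing US-GAAP element so the figure
     remains comparable across filers. -->
<us-gaap:InterestExpense contextRef="FY2010" unitRef="usd"
    decimals="-3">1250000</us-gaap:InterestExpense>
```

Every custom element like the first one breaks comparability, because analysis tools cannot map it automatically to the figures other companies report with the standard tag.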

Another of the most common mistakes companies have been making is the inappropriate use of negative values. This is the result of a collision between common accounting practices and the way values are handled by the US-GAAP taxonomy. Some financial reporting elements may be either positive or negative. For example, the bottom line could be net income or a net loss, and changes in balance sheet items can add to or subtract from cash. Those entering values have at times entered dividend or share repurchase amounts as negative numbers, which usually makes sense in a paper-based report but does not in an XBRL filing because of the way the taxonomy handles the sign of each element.
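A sketch of what this looks like in an instance document, with hypothetical context and unit identifiers and an invented amount: on a printed cash flow statement, dividends paid typically appear in parentheses, e.g. (500,000), but the taxonomy element itself already carries the “payment” (outflow) meaning, so the fact is reported as a positive number:

```xml
<!-- Wrong: copying the parenthesized paper-report convention. -->
<us-gaap:PaymentsOfDividends contextRef="FY2010" unitRef="usd"
    decimals="0">-500000</us-gaap:PaymentsOfDividends>

<!-- Right: the element's definition supplies the direction of the
     flow, so the value is entered as a positive number; display
     formatting (parentheses, sign flips) is handled separately. -->
<us-gaap:PaymentsOfDividends contextRef="FY2010" unitRef="usd"
    decimals="0">500000</us-gaap:PaymentsOfDividends>
```

This is exactly the kind of convention an automated tagging system applies consistently once configured, whereas a person keying values from a printed statement can easily carry the negative sign over.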

Although issues have been raised in the initial stages of financial statement tagging, it’s hard to imagine that this will be a large issue in the future. I expect that companies will apply a high degree of automation to their tagging process, and by the time they become liable for mistakes, most will have baked the correct tags and other elements into that process. Structural changes to the financial statements that occur as a result of mergers, divestitures or changes in accounting regulations are likely to be limited from one period to the next. Moreover, audited annual tagged filings almost certainly will be reviewed by the outside auditors, and the same would apply to companies that publish audited quarterly statements.

Companies have multiple options for automating the tagging process. Vendors including Host Analytics, IBM (through its recent acquisition of Clarity), Longview and Oracle offer systems for managing a company’s closing process and automating the tagging of financial statements. I also recently wrote about the merger of XBRL technology providers UBmatrix and Edgar Online, which gives them further leverage in their efforts to provide software for automating XBRL tasks; UBmatrix has also licensed its technology to Oracle and SAP.

Let me know your thoughts or come and collaborate with me on Facebook, LinkedIn and Twitter.


Robert D. Kugel CFA – SVP Research

Ventana Research has just announced its Value Index for Financial Performance Management (FPM) for 2010. Our Value Indexes are user-focused assessments of how well software vendors and packages enable companies to improve their execution of core processes. This one is designed to help businesses, especially the finance organization, evaluate the FPM software suites offered by major vendors in the context of their specific needs. Ventana Research defines financial performance management as the practice of managing the efficiency and effectiveness of financial processes, including analytics, budgeting, consolidation, planning, reporting and strategy. The methodology we use to produce the Value Indexes involves evaluating in detail aspects of product functionality and suitability to task as well as the effectiveness of vendor support for the buying process and customer assurance.

This year’s Value Index for Financial Performance Management found SAP to be the company delivering the highest value on an overall weighted evaluation, which earned it the Hot Vendor rating. It is followed closely by Infor, IBM, Clarity Systems (which was recently acquired by IBM), Host Analytics, SAS and Longview, all of which were rated as Hot Vendors, as well as Oracle, which earned the Warm Vendor rating. More than in our other Value Indexes, the offerings here were very close in their overall ratings. This largely reflects the maturity of the FPM category. Because this evaluation covers application suites and not solely an individual component, the software offerings are aimed mainly at large and midsize organizations.

From my perspective, the biggest challenge in doing an objective assessment of business software is structuring the assessment. Unlike IT infrastructure software and tools, most of the capabilities important to business users cannot be captured by the “speeds and feeds” that define infrastructure products. The purpose of FPM software is to help businesses achieve greater efficiency and effectiveness. Since in almost all cases there are at least several ways (if not dozens or scores) to define and organize the work that needs to be done, there can be many different ways to support those efforts. Consequently, I have found that in many cases the main issue is whether the software will support the users’ performance management efforts. Whether a package has nine or 19 different ways of spreading values across budget months, for example, is not as important as having the right nine.

Given how close the rankings are, you may ask whether this means that one offering is as good as another. It absolutely does not. Every business has its own FPM processes as well as different IT infrastructure requirements. It is very likely that as your organization winnows the long list down to a short one, you will find some very specific differentiators that tilt the decision to the ultimate winner of your evaluation. These determinants may include specific features or bits of functionality, or the specific progression of screens or steps in executing a process. If everything else is equal, you may base your choice on existing license arrangements. Your company may want a software-as-a-service (SaaS) solution or may be dead set against one. But in any case, doing a structured evaluation, one that methodically assesses the characteristics your organization needs in FPM software, will produce the best results. We believe that this Value Index will help guide you to the selection that is right for you. If you want the executive report, you can download it at our FPM Value Index site.

I express my thanks to the vendors that invested their time and energy to ensure that the evaluation was accurate, comprehensive and unbiased.

Let me know your thoughts or come and collaborate with me on Facebook, LinkedIn and Twitter.


Robert D. Kugel CFA – SVP Research
