International Journal of Finance and Accounting

p-ISSN: 2168-4812    e-ISSN: 2168-4820

2014; 3(6): 341-348

doi:10.5923/j.ijfa.20140306.02

Accounting for Intellectual Capital: Investigating Reliability

Marco Giuliani

Department of Management, Università Politecnica delle Marche, Ancona, Italy

Correspondence to: Marco Giuliani, Department of Management, Università Politecnica delle Marche, Ancona, Italy.

Copyright © 2014 Scientific & Academic Publishing. All Rights Reserved.

Abstract

The purpose of this study is to analyse the extent to which Intellectual Capital (IC) measurements can be considered reliable and how their reliability can be affected. To achieve this aim a deductive approach is adopted. The study relies on the idea that reliability is influenced by the characteristics of the item, of the applied method and of the environment. The paper shows that IC measurements present an inherently (“physiologically”) limited reliability, due to their specific characteristics, and proposes some suggestions useful to improve it.

Keywords: Intangibles, Reliability, Intellectual capital, Accounting

Cite this paper: Marco Giuliani, Accounting for Intellectual Capital: Investigating Reliability, International Journal of Finance and Accounting, Vol. 3 No. 6, 2014, pp. 341-348. doi: 10.5923/j.ijfa.20140306.02.

1. Introduction

Among the various qualities that presumably make up good accounting information, reliability can be considered particularly important. In fact, if reliability is completely missing from a piece of information, the information will not be useful, i.e. measurement errors affect the ability to find significant results and, at worst, they can alter the interpretation of a phenomenon [26, 55, 62]. In general, reliability can be defined as a quality that relates to the capacity of a measure to represent a phenomenon over time without errors or bias, i.e. reliability can be seen as an “indicator” of faultless consistency. Even if the concept of reliability has been explored in depth for traditional accounting measures, critiques regarding the reliability of “new” accounting measures have arisen [33, 41].
In fact, according to an evolutionary theory perspective and considering the declining usefulness of traditional financial information, recent accounting studies have led to the development of “new” measures better able to support users’ decision-making. Among these new measures, a relevant role is played by those regarding Intellectual Capital (IC), i.e. by the attempts made to extend the boundaries of accounting in order to make invisible (intangible) assets visible [21, 33]. One of the main obstacles to a wider and more intense use of IC measurements is considered to be the difficulty of assigning meaningful and reliable quantitative values [42].
In light of these considerations, the purpose of this study is to analyse the extent to which IC measurements can be considered reliable and how their reliability can be affected. To achieve this aim a deductive approach is adopted. This study relies on the idea that reliability is influenced by the characteristics of the item, of the applied method and of the environment [8, 12, 60].
This study can contribute to a better understanding of the accounting measures related to IC, of their potential and limitations, and consequently to a more adequate use of them for internal or external reporting purposes. Moreover, this study can contribute to the discussions about accounting qualities, and it can enrich the critical discussions on IC measurements and on the opportunity to extend the boundaries of accounting [21, 33].
In comparison to previous studies, this one does not investigate single aspects of reliability but tries to offer a systematic perspective. Furthermore, it deepens the idea of the “fragility” of IC measurements proposed by Mouritsen [42] and the analysis of the qualities of IC measurements [20]. Besides, this study does not focus on perceived reliability, i.e. the credibility that users assign to a measure, but on the reliability strictly related to the object and the accounting information itself.
The main limitations of this study are the following. First, this study is theoretical and therefore suffers from the lack of empirical testing or analysis. Second, it refers to specific dimensions of reliability that have been considered within the social science discourse but that, to the author’s knowledge, have not been fully considered in accounting.
The structure of the study is outlined as follows. The next section develops the concept of reliability used for this investigation. In the central part, after a brief review of the state of the art of the studies on how to account for IC, an attempt will be made to make sense out of the findings and to develop the theoretical arguments of the study. Finally, some valuable insights are extracted and systematized to draw some conclusions and to propose future research opportunities.

2. Framing the Idea of Reliability

The aim of measuring practices is to make an evaluation about the units, events or objects that are subject to measurement and to make decisions depending on the results of the evaluation. The correctness and appropriateness of the decisions depend on the evaluation results, which in turn depend on the measurement results and on the appropriateness of the measurement tool. In other words, a measure, to be useful, has to be faultless, i.e. reliable [26, 55, 62].
According to previous studies, the concept of reliability can be understood as a property that relates to the capacity of a measure to represent the underlying economic construct or phenomenon over time without errors or bias, and it tends to be associated with the concepts of stability, reproducibility and accuracy [e.g. 31]. Stability refers to the ability of a judge to code data the same way over time. It is the weakest of reliability tests. Assessing stability involves a test-retest procedure. The aim of reproducibility is to measure the extent to which coding produces the same results when multiple coders code the text. In this case, reliability is often referred to as inter-rater reliability. The accuracy dimension of reliability regards assessing the coding performance of coders against a predetermined standard set by a panel of experts or known from previous experiments and studies.
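These dimensions can be illustrated with simple statistics. The following Python sketch is purely illustrative (the data, labels and helper function are hypothetical, not drawn from the paper): it computes a test-retest correlation as a proxy for stability and Cohen’s kappa, a common inter-rater statistic, for reproducibility.

```python
# Hypothetical sketch of the two reliability checks described above.
# All data are invented for illustration.
from statistics import correlation  # Python 3.10+

def cohen_kappa(rater_a, rater_b):
    """Inter-rater agreement corrected for chance (Cohen's kappa)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Stability: the same judge scores the same items at two points in time.
scores_t1 = [3.0, 4.5, 2.0, 5.0, 3.5]
scores_t2 = [3.5, 4.0, 2.5, 5.0, 3.0]
print("test-retest r:", correlation(scores_t1, scores_t2))

# Reproducibility: two coders classify the same items.
coder_1 = ["IC", "tangible", "IC", "IC", "tangible"]
coder_2 = ["IC", "tangible", "IC", "tangible", "tangible"]
print("Cohen's kappa:", cohen_kappa(coder_1, coder_2))
```

Accuracy would be checked analogously, comparing a coder against the expert-set standard rather than against another coder.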
In accounting, reliability seems to assume a partly different meaning and content. In financial accounting, the importance of reliability arises in large part because the recognition of an item allows it to be aggregated into the computation of summary measures such as earnings and book value. Thus, items not meeting an appropriate level of reliability are not recognized in the financial statements, but may still be communicated to investors through the footnotes or through other means of disclosure [54]. Even if the concept of reliability clearly matters, some would argue that no clear definition of reliability is provided in any of the main accounting standards (such as, for that matter, the US, UK or Australian standards) [16]. Nevertheless, the common thread (“fil rouge”) between these definitions seems to be the idea of accuracy and fairness of presentation.
Management accounting information must be reliable so that confidence can be placed in it. More specifically, in this field it is assumed that a lower degree of reliability can be justified by the need to achieve timeliness and relevance; consequently, accounting information can be more subjective and judgmental [4]. The acceptance of a more limited reliability can be traced to two considerations [6]. First, the measures used for managerial purposes must be considered in their totality to obtain a richer understanding of the organization, so that the limited precision and reliability of a single indicator may be offset to some degree by the holistic impression offered by the panel of measures. Secondly, managerial measures are generally chosen not for their presumed precision and reliability for public reporting but for their capacity to capture the attention of managers.
Although in accounting the importance of reliability is known and the role played by this quality in different accounting contexts has been examined, few investigations have had the ambition to explore the factors that affect it, in order to understand if and how it is possible to improve the reliability of accounting measures. In this analysis, moving from studies in science and social science, we argue that the factors that can affect reliability are the following [8, 12, 60]: the item to measure; the measurement method adopted; and the environment within which the measurement is carried out.
An item that can be reliably measured should be stable over time, definite, clear (not ambiguous), homogeneous (and therefore comparable), and its definition should be objective. This idea has several implications, such as that an abstract item necessarily tends to be less reliably measured than a concrete one, because concrete items are usually more stable, comparable, clear and objective. In accounting, it is possible to refer to the connections between the accounting construct and the related phenomenon. In particular, the reliability of an accounting construct is limited whenever the phenomenon itself is not clear due to its randomness or indefiniteness. In other words, it seems that the degree of certainty of the relationship between the phenomenon and the value creation process can affect the level of reliability of the accounting construct. Of course, the quality of the accounting construct is also affected, as will be shown later, by the environment, i.e. by human perception and judgment [35].
With reference to the measurement method, even if it is not possible to draw the line between “what is and what is not a reliable calculative technology” [46], the following characteristics are generally considered to have an impact on the reliability of a measure: length, understandability, equivalence, stability and objectivity. Errors are smaller in measurements obtained from long scales than from short scales, i.e. a measurement method that considers more aspects of a phenomenon is more reliable than one that considers fewer. This is because the consideration of a wider variety of items makes it possible to achieve a more detailed and isomorphic representation of the phenomenon and consequently a more reliable measure.
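The claim that longer scales yield smaller errors has a standard formalization in classical test theory, the Spearman-Brown prophecy formula (a textbook result, not one the author cites): lengthening a scale by a factor $k$ changes its reliability $\rho$ to

$$\rho^{*} = \frac{k\rho}{1 + (k-1)\rho}.$$

For instance, doubling a scale ($k = 2$) with reliability $\rho = 0.6$ yields $\rho^{*} = 1.2/1.6 = 0.75$, which illustrates why methods covering more aspects of a phenomenon tend to produce more reliable measures.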
Understandability constitutes an aspect of ease of use, namely the effort required to read and correctly interpret the model [2]. In general, a method which can be clearly understood and which is supported by adequate instructions tends to produce more reliable results than an unclear or unsupported one. It has to be pointed out that, since users can have different backgrounds (mathematical or not), what is understandable to one user may not be as understandable to another.
Equivalence refers to the degree of agreement between two or more methods administered at nearly the same point in time. Johnson [28] classifies the definitions of this term into two categories: definitions of the interpretative kind versus definitions at the procedural level. The former category pertains to types of equivalence that deal with similarities in the way abstract, latent concepts are interpreted among different cultures or cultural groups. One of the most cited forms of interpretative equivalence is “concept equivalence” [25]. This type of equivalence implies that the concept can be meaningfully discussed in the cultures or cultural groups concerned. Procedural equivalence refers to types of equivalence that are concerned with the measurements and procedures used to make comparisons. The basic level of procedural equivalence is called “construct equivalence” [61], which implies that the instrument measures the same latent trait in all of the items under investigation. In accounting, this is the dimension that allows for inter-company comparisons and therefore for the formulation of a judgment.
Stability occurs when the same or similar results are obtained by testing and retesting the same phenomena, i.e. the method should reproduce comparable results over time. It is also known that observations are related over time; consequently, the closer in time two measurements are carried out, the more similar are the factors that contribute to error, and therefore a higher level of reliability is achieved [41]. This property relates to the fact that accounting methods should be consistent over time in order to produce understandable and valuable data.
Objectivity, also known as inter-observer reliability, is the extent to which equally competent measurers get the same results. Methods based on objective data rather than on personal assessments and perceptions tend to be more unbiased and consequently more reliable, because they do not involve any element of subjectivity and therefore eliminate measurer inconsistency. Objectivity in social science has already been questioned [34]. Accounting is not an absolutely objective technology, because measures are taken by observing an object from a point of view, motivated by all sorts of personal factors within a certain cultural and historical context [39, 52]. In all, accounting is a matter of interpretation and judgment and therefore cannot be fully objective.
The last aspect to consider is the environment, which refers to the conditions under which the measurement is done, i.e. the context in which the measurement is carried out, the characteristics of the measurer, the time limits imposed and the quality of the data used. Regarding the context, the possibility to count on a friendly and collaborative environment and the absence of obstacles in the measurement process contribute to achieving a higher level of reliability [7, 14]. This idea is well represented by Power [47], who highlights that sometimes the increased reliability of a measure can be due not to an evolution of the technology adopted but simply to a change in the climate. Some studies [39, 52] have also underlined that accounting is a matter of interpretation of phenomena and that consequently the role, knowledge and competences of the accountant are central. Time availability influences reliability because insufficient time decreases the reliability of the result, i.e. the method should be compatible with the time available for the measurement. Data limitations can reduce reliability when information systems fail to capture the data needed to develop the measurement process or when the data produced by the system are not correct [35].
Moving from this conception of reliability, in the following sections intangibles’ accounts will be investigated.

3. Defining and Measuring IC

Nowadays organizations need to exploit all their resources to sustain success. Along with tangible, physical assets, intangible, nonphysical assets have also become important for organizations. Today some argue that intangibles are critical success factors, not only for knowledge-intensive organizations but also for most other types of organizations, and, therefore, that they matter [17, 56].
Since the 1990s, the idea of intangibles has evolved into that of IC, understood as the whole set of (strategic) intangibles that a firm needs to create value. This approach has moved the attention from the measurement of single resources to the measurement of a system such as IC [9, 27].
Although IC has been discussed for more than a decade, it still seems not to be well understood, because the term normally accompanies different concepts, such as assets, investments, resources or other phenomena [9, 27]. The different approaches adopted to examine IC (accounting, management, etc.) and the different research frameworks used have led to a plethora of definitions without gaining a consensus. This situation can also be due to the phenomenon itself: in fact, IC is invisible, dynamic and firm-specific, and this makes it difficult to identify its boundaries and to understand it [21].
Although individual organizations and authors have suggested numerous IC measurement methods and tools for internal (managerial) and external (disclosure) purposes, other scholars still consider measuring IC a problematic and open issue [21, 30, 51].
According to part of the literature [3, 59], four main types of measurement approaches applied to IC can be identified (a decision-rule sketch of this classification follows the list):
• when the criterion of value is expressed in monetary terms, the method to determine value is a financial valuation method;
• if a nonmonetary criterion is used and this can be translated into observable phenomena the method is a value measurement method;
• if the criterion cannot be translated into observable phenomena but instead depends on personal judgment by the evaluator, then the method is a value assessment method;
• if the framework does not include a criterion for value but does involve a metrical scale that relates to an observable phenomenon, then the method is a measurement method.
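As anticipated, the decision rule implicit in this list can be sketched in code. The following Python function is purely illustrative (its name and boolean encoding are hypothetical, not a formalization taken from [3] or [59]):

```python
# Hypothetical decision rule for the four approach types listed above.
def classify_ic_approach(has_value_criterion: bool,
                         monetary: bool,
                         observable: bool) -> str:
    """Map the stated properties of an IC framework to an approach type."""
    if not has_value_criterion:
        # No criterion for value, but a metrical scale tied to an
        # observable phenomenon.
        return "measurement method"
    if monetary:
        return "financial valuation method"
    # Nonmonetary criterion of value:
    if observable:
        return "value measurement method"
    return "value assessment method"  # depends on the evaluator's judgment

print(classify_ic_approach(True, True, True))    # financial valuation method
print(classify_ic_approach(True, False, False))  # value assessment method
```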
Another classification is by Sveiby [59], who distinguishes between Direct Intellectual Capital Methods (DIC), Market Capitalization Methods (MCM), Return on Assets Methods (ROA) and Scorecard Methods (SC). In particular, DIC methods estimate the value of IC by identifying its various components and evaluating them directly, either individually or as an aggregated coefficient. MCM methods instead take the difference between a company’s market capitalization and its stockholders’ equity as the value of its IC. ROA methods are based on discounting the future excess earnings that can be attributed to IC. Finally, SC methods identify and measure IC with indicators and indices. While the first three approaches produce a financial value, the last one determines a non-financial value or a measure.
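As a worked illustration of the MCM logic, with hypothetical figures: a company with a market capitalization (MV) of 500 and stockholders’ equity (BV) of 320 would obtain

$$IC_{MCM} = MV - BV = 500 - 320 = 180,$$

a single aggregate value for IC that says nothing about the individual IC components.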
Another distinction can be made between static and dynamic approaches. In the first case, IC is mainly approached as a stock, as something that can be identified, located, measured and valued just like any other resource, and is useful to visualize and understand the gap between market value and book value [17, 58]. Dynamic approaches [15, 37], instead, focus on the transformation processes, connections and causal relationships between IC and organizational outcomes [29, 44]. Hence, the contribution of IC to the value creation process becomes central [15, 43, 53].
The last distinction proposed in this paper is between a holistic and an analytical approach, where the former aims to value IC as a whole while the latter values specific IC components [32, 59].
Even if accountants, financial analysts and banks seem to be generally sceptical regarding IC measurements, because they perceive them as not completely reliable and too subjective [1, 9, 18], the author is not aware of studies that investigate in depth whether IC measurements have an ontologically limited reliability or whether it is just a problem of perception, i.e. of credibility. The distinction is relevant because the policy activities needed to make IC measurements more reliable or more credible are different, and reliability can be considered a condition for achieving credibility and, in turn, value relevance [54]. Thus, in this study we investigate the reliability dimension.

4. Thinking about the Reliability of IC Measurements

In order to analyse the reliability of IC measurements the framework presented above will be used. As mentioned, the framework focuses on the characteristics of the item, of the method and of the environment.

4.1. Item

IC is an abstract accounting object that cannot be touched or seen but only perceived by the beholder. This can lead to the idea that measures regarding IC are naturally less reliable than those referring to tangible items. Investigating the single dimensions in more depth, IC is not stable but highly dynamic [37], because it changes over time on two levels. The first refers to the fact that IC resources have a dynamic nature depending on the development or regression of their interactions with the other firm resources [11]. The second follows the idea that they can also change because the way they are perceived changes [19]: the object is the same but it is perceived differently. All this can lead to errors due to instability [12].
IC is also not definite and clear, because it lacks a generally accepted definition and thus its boundaries cannot be drawn exactly [21]. Even if there is convergence on the essence of IC (knowledge), there is still confusion regarding how to define it and how to draw its boundaries: in fact, different concepts and objects can be included under the same label [5], making this resource, as well as its measurements, not self-evident [38]. Thus, its understandability, and the related reliability, can differ between an insider and an outsider who does not know the rules for decoding the label [22].
IC is also a heterogeneous entity, because its components present different properties [23, 44]. For example, human capital and relational capital are not owned and controlled by the company while structural capital is; the degree of volatility and predictability of some IC resources differs from that of others, as does their contribution to the value creation process. Moreover, IC is also heterogeneous because, as said, it is firm-specific and tightly related to the company through multiple connections [11]. All this can cause errors due to non-homogeneity [12].
Finally, as IC can only be perceived by the beholder, this accounting object is necessarily subjective and influenced by the “construction” realized by the accountant [20]. Thus, the more experienced and professional the beholder, the more reliable the identification of the object and consequently its measure. In addition, considering that IC is firm-specific, a standardization of the item to account for does not seem possible and consequently subjectivity cannot be avoided [5].

4.2. Methods

According to Roos and Roos [49], there are many analytical difficulties in handling measures useful to visualize and understand IC. The reliability of the method is a relevant issue, and it is influenced by the length, understandability, equivalence, stability and objectivity of the method chosen. Thus, a deeper understanding of these dimensions should help achieve more reliable measurements.
The length of the methods proposed to measure IC is highly variable. Direct methods [59], as well as market-to-book-value methods [56], which are based on few parameters, are certainly shorter than those based on real option theory [57], which instead require considering several variables. According to theory, a longer method, i.e. one with more variables, should lead to more reliable measurements than one with fewer variables. In the case of IC, this idea partly conflicts with the fact that some of the variables required by complex methods are often not available in companies, so perceptions must be used instead of real data, and this can impair the actual level of reliability of the resulting measurement [20].
Understandability is a quality often recalled in the descriptions of guidelines or models [see e.g. 40] because “the readership also wants to be able to understand the intellectual capital statement through analytical lenses. The problems of reading intellectual capital statements are quite understandable, because they are rarely in similar formats since there is no generally accepted format for the presentation of intellectual capital information, which makes the reading of the text, the numbers and the images a difficult activity” [43]. In the case of IC, understandability seems difficult to operationalize for several reasons [5, 45]. First, the terminology adopted is not consistent: it is possible to find the terms IC, knowledge capital, intellectual assets, etc. Second, the translation of the methods and of the parameters considered from English into other languages can be a source of misunderstanding. Further, nearly each approach is based on individual definitions and classifications; consequently, this diversity can reduce the understandability and interpretability of the method and of the measurement determined. Finally, understandability is not just a theoretical concept but has to be related to the level of knowledge and competences of the users of the information, i.e. to the environment (the managers, the investors, the market, etc.). In all, IC measurement methods and tools seem to be difficult to understand in depth, and this limited understandability can also be one of the causes of the limited comparability of their measures and reports and consequently of their limited credibility [24, 45].
Equivalence is also a problematic aspect, as none of the proposed methods has yet achieved a wide consensus in academia and in practice, probably because the topic is interdisciplinary, recent and with few empirical applications [36]. In addition, even if theoretically all the measurement methods and tools should lead to a common range of measures, in practice equivalence between the different IC measurement methods and tools seems difficult to achieve, even when they are applied by the same accountant, because of the different variables considered. Probably, a higher level of equivalence can be obtained by applying the same type of method (e.g. income based or cost based). Moreover, in Johnson’s [28] terms, IC measurement methods have a “construct equivalence” problem, because the instruments are not only designed differently but also refer to different constructions and objects.
As stability relates to consistency, it leads to two thoughts. The first is that it is difficult to develop a test-retest in social science, because the underlying phenomenon is continuously changing and therefore it is not possible to verify in practice whether the same method can produce similar results over time. This is particularly true for IC: considering its dynamism, it appears unlikely that the same measurement method applied at different moments in time will yield similar results. In all, stability can be verified only at a theoretical level and is generally assumed as a property of all IC measurement methods. The second thought is that, to approach IC, measurement methods and tools should change over time [19]. Therefore, even if the single measurement method is stable, it seems that approaching IC requires instability, i.e. the application of different methods over time.
Regarding objectivity, it must first be noted that all accounting data involve a more or less intense subjectivity. Using the expression “hard data” for the data collected from the accounting system, to distinguish them from data collected through perceptions, it is possible to notice that while some methods are mainly based on hard data (e.g. KCV, MV-BV, MCM, etc.), others tend to focus more on perceptions (e.g. IC index, Technology Broker, etc.). This implies that the application of some methods, and the measure determined, are strictly related to the eye of the beholder, i.e. to the personality of the accountant. Apart from the specificities of each method, it has to be considered that, in measuring IC, the use of perceptions can be required by the method itself or by the lack of hard data useful to visualize, understand and report IC, and consequently by the need to make up for this lack with perceptions [20]. By way of example, the impact of IC on the value creation process is usually mapped using subjective methods, because companies do not have enough data to develop statistical models which, combined with perceptions, could lead to more reliable results.

4.3. Environment

This section focuses on the relevance of the context in influencing the reliability of IC measurements. In developing measurement processes, the people involved play a relevant role, as has been highlighted in several studies. By way of example, Chaminade and Roberts [11] explore the importance of the “point of entry” of projects which aim to measure IC in order to determine their possible developments; Mouritsen [42] highlights that IC is not relevant per se but as something useful to capture managerial attention and therefore to understand the past and to influence the future; Chiucchi [13] examines the organizational factors which determine the “lock in” phenomenon in IC measurement projects; Giuliani and Marasca [20] highlight the role of focus groups in determining the construction of IC and the development of their measurement process. Moreover, the fact that IC cannot be touched but is a “result” of perceptions reinforces the idea that IC is tied to the organization [44]. In all, it seems that the context can influence IC measurements.
Determining a reliable measure of a company’s IC requires the availability of reliable input data. Measurement methods may take as input data extracted from the financial accounting system, from the management accounting system, or elaborated ad hoc. The degree of reliability of these three sources varies depending on several variables, such as the technology adopted, the competences available in the company, the presence of auditors, etc. In theory, financial accounting data tend to be more reliable than management accounting data, where the need for reliable data can be mitigated by the need for economical, timely, understandable and relevant data [48, 50]. At the same time, traditional management accounting data can be more reliable than IC-specific data, because they come from more consolidated and structured systems that are used more frequently, verified and monitored.
Moving from these thoughts, it emerges that methods based on financial accounting data (e.g. KCV, ROA, etc.) tend to be potentially more reliable than those also based on IC-specific data (e.g. weightless wealth toolkit, IC index, etc.). It should be remembered that the measurement process is a never-ending process in which the regular use of the system can lead to deeper knowledge and, consequently, to more reliable measurements [20, 53].

5. Conclusions

The purpose of this study was to analyse, from a theoretical perspective, the extent to which IC measurements can be considered reliable and how it is possible to affect their reliability.
Moving from the conception of reliability adopted, the results show that IC measurements seem to present an ontologically limited reliability, partly due to the nature of IC and partly due to the absence of consensus regarding the characteristics of the measured objects and the methods to be adopted. In fact, IC seems to present some characteristics, such as instability and heterogeneity, which unavoidably limit the degree of reliability, while the weaknesses related to other peculiarities, like understandability and objectivity, can be influenced by policy makers through the release of measurement guidelines that reduce the uncertainties related to the IC visualisation process.
Another aspect that emerges is that the three factors considered (item, method and environment), as well as the sub-factors mentioned, are strictly related to one another. For example, the absence of a generally accepted definition implies the introduction of subjectivity into the method and highlights the relevance of the environment. Moreover, a measurement method that considers more aspects tends, on the one hand, to be more reliable than one that considers fewer aspects but, on the other hand, can be more subjective, less understandable and more expensive to implement. In all, it emerges that the level of reliability of a measure is the result of how the trade-offs between the single dimensions considered are handled in the measurement process. Consequently, the level of reliability of an IC measurement can be managed not only by considering the three factors per se but also by managing their interrelations, i.e. by adopting a systemic perspective.
In summary, IC measurements seem to present a “physiologically” limited level of reliability. To improve this level, and considering that it is not possible to influence the object, actions that influence the other two factors should be carried out. A first research avenue is represented by an in-depth investigation of how to influence each factor and what the effect could be. This implies not only theoretical studies but also empirical studies adopting the tests used to measure reliability (e.g. test-retest, inter-rater tests, etc.).
Among the activities that can improve reliability, a particular role is played by guidelines. In fact, the definition of general guidelines can improve the reliability of IC measurements because they can reduce the difficulties related to the understandability, consistency, equivalence, etc. of the methods, even if they cannot solve the problems related to some specific aspects like stability or the subjectivity inherent in the measurement process. Moreover, the intervention of standard setters can influence the behaviour of accountants and the environment, making it more suitable for IC measurements. Another research avenue is represented by the study of how guidelines useful to influence the reliability of IC accounts should be structured.
The findings provided by this research should be useful to those interested in studying measurement and valuation processes and in understanding and testing the reliability and the improvement of measures and values. Moreover, considering that the literature on accounting for IC often seems less than clear on the reliability implications of the single phases of the measurement and valuation processes proposed, it is hoped that this study has added some clarification to this issue. In addition, it has offered a preliminary framework which can help in understanding the level of reliability of a measure of IC determined by adopting a specific method within a specific context. Furthermore, this research should contribute to the debate about the need to develop new accounting measures and to evolve existing ones, and to the request for reliable information coming from firms’ stakeholders. Finally, considering that policy makers can, as in the case of IC, influence some of the factors affecting reliability, this can represent an incentive for policy makers to draw up useful rules to make measurements more reliable and to identify best practices.

References

[1]  Amir, E., Lev, B., and Sougiannis, T., 2003, Do financial analysts get intangibles?, European Accounting Review, 12(4), 635-659.
[2]  Anderson, J. R., 2000, Cognitive Psychology and its Implications. 5/e, New York, Worth.
[3]  Andriessen, D. G., 2004, Making sense of intellectual capital, Burlington, MA, Butterworth-Heinemann.
[4]  Atkinson, A., Banker, R., Kaplan, R., and Young, S. M., 2004, Management Accounting, New York, Prentice Hall.
[5]  Brännström, D., and Giuliani, M., 2009, Accounting for intellectual capital: a comparative analysis, VINE: The journal of information and knowledge management systems, 39(1), 68-79.
[6]  Brignall, S., and Modell, S., 2000, An institutional perspective on performance measurement and management in the ‘new public sector', Management Accounting Research, 11(3), 281-306.
[7]  Burns, J., and Scapens, R. W., 2000, Conceptualizing management accounting change: an institutional framework, Management Accounting Research, 11(1), 3-25.
[8]  Carmines, E. G., and Zeller, R. A., 1982, Reliability and validity assessment, Beverly Hills, Sage Publications.
[9]  Catasús, B., and Gröjer, J.-E., 2003, Intangibles and credit decisions: results from an experiment, European Accounting Review, 12(2), 327-355.
[10]  Chaminade, C., and Johanson, U., 2003, Can guidelines for intellectual capital management and reporting be considered without addressing cultural differences?, Journal of Intellectual Capital, 4(4), 528-542.
[11]  Chaminade, C., and Roberts, H., 2003, What it means is what it does: a comparative analysis of implementing intellectual capital in Norway and Spain, European Accounting Review, 12(4), 733-751.
[12]  Chau, P. Y. K., 1999, On the use of construct reliability in MIS research: a meta-analysis, Information and Management, 35(4), 217-227.
[13]  Chiucchi, M. S., 2008, Exploring the benefits of measuring intellectual capital. The Aimag case study, Human Systems Management, 27(3), 217-230.
[14]  Cooper, D. J., Everett, J., and Neu, D., 2005, Financial scandals, accounting change and the role of accounting academics: A perspective from North America, European Accounting Review, 14(2), 373-382.
[15]  Cuganesan, S., 2005, Intellectual capital-in-action and value creation. A case study of knowledge transformation in an innovation process, Journal of Intellectual Capital, 6(3), 357-373.
[16]  Dahmash, F. N., Durand, R. B., and Watson, J., 2009, The value relevance and reliability of reported goodwill and identifiable intangible assets, British Accounting Review, 41, 120-137.
[17]  Edvinsson, L., and Malone, M. S., 1997, Intellectual Capital, New York, Harper Business.
[18]  Flöstrand, P., 2006, The sell side – observations on intellectual capital indicators, Journal of Intellectual Capital, 7(4), 457-473.
[19]  Giuliani, M., 2009, Intellectual capital under the temporal lens, Journal of Intellectual Capital, 10(2), 246-259.
[20]  Giuliani, M., and Marasca, S., 2011, Construction and valuation of intellectual capital: a case study, Journal of Intellectual Capital, 12(3), 377-391.
[21]  Gowthorpe, C., 2009, Wider still and wider? A critical discussion of intellectual capital recognition, measurement and control in a boundary theoretical context, Critical Perspectives on Accounting, 20(7), 823-834.
[22]  Graham, C., 2008, Fearful asymmetry: The consumption of accounting signs in the Algoma Steel pension bailout, Accounting, Organizations and Society, 33(7-8), 756-782.
[23]  Gröjer, J. E., 2001, Intangibles and accounting classifications: in search of a classification strategy, Accounting, Organizations and Society, 26(7), 695-713.
[24]  Guthrie, J., and Petty, R., 2000, Intellectual capital: Australian annual reporting practices, Journal of Intellectual Capital, 1(3), 241-251.
[25]  Hui, C. H., and Triandis, H. C., 1985, Measurement in cross-cultural psychology. A review and comparison of strategies, Journal of Cross-cultural Psychology, 16(2), 131-152.
[26]  Ijiri, Y., 1967, The Foundations of Accounting Measurement, Englewood Cliffs, Prentice Hall.
[27]  Johanson, U., Mårtensson, M., and Skoog, M., 2001, Measuring to understand intangible performance drivers, European Accounting Review, 10(3), 407-437.
[28]  Johnson, T. P., 1998, Approaches to equivalence in cross-cultural and cross-national survey research, in Harkness, J. A. (ed.), Cross-cultural survey equivalence, Mannheim, ZUMA, 1-40.
[29]  Kaplan, R. S., and Norton, D. P., 1992, The Balanced Scorecard - measures that drive performance, Harvard Business Review, 70(1), 71-79.
[30]  Kaufmann, L., and Schneider, Y., 2004, Intangibles. A synthesis of current research, Journal of Intellectual Capital, 5(3), 366-388.
[31]  Krippendorff, K., 1980, Content Analysis: An Introduction to Its Methodology, Thousand Oaks, Sage Publications.
[32]  Lev, B., and Zambon, S., 2003, Intangibles and intellectual capital: an introduction to a special issue, European Accounting Review, 12(4), 597-603.
[33]  Lev, B., and Zarowin, P., 1999, The boundaries of financial reporting and how to extend them, Journal of Accounting Research, 37(2), 353-385.
[34]  Madill, A., Jordan, A., and Shirley, C., 2000, Objectivity and reliability in qualitative analysis: realist, contextualist and radical constructionist epistemologies, British Journal of Psychology, 91(1), 1-20.
[35]  Maines, L. A., and Wahlen, J. M., 2006, The Nature of Accounting Information Reliability: Inferences from Archival and Experimental Research, Accounting Horizons, 20(4), 399-426.
[36]  Marr, B., and Chatzkel, J., 2004, Intellectual capital at the crossroads, Journal of Intellectual Capital, 5(2), 224-229.
[37]  Marr, B., Schiuma, G., and Neely, A., 2004, The dynamics of value creation: mapping your intellectual performance drivers, Journal of Intellectual Capital, 5(2), 224-229.
[38]  Mårtensson, M., 2009, Recounting counting and accounting. From political arithmetic to measuring intangibles and back, Critical Perspectives on Accounting, 20(7), 835-846.
[39]  McKernan, J. F., 2007, Objectivity in accounting, Accounting, Organizations and Society, 32(1-2), 155-180.
[40]  Meritum, 2002, Proyecto Meritum: guidelines for managing and reporting intangibles, Madrid.
[41]  Milne, M. J., and Adler, R. W., 1999, Exploring the reliability of social and environmental disclosures content analysis, Accounting, Auditing & Accountability Journal, 12(2), 237-256.
[42]  Mouritsen, J., 2006, Problematising intellectual capital research: ostensive versus performative IC, Accounting, Auditing & Accountability Journal, 19(6), 820-841.
[43]  Mouritsen, J., and Larsen, H. T., 2005, The 2nd wave of knowledge management: The management control of knowledge resources through intellectual capital information, Management Accounting Research, 16(3), 371-394.
[44]  Mouritsen, J., Larsen, H. T., and Bukh, P. N. D., 2001, Intellectual capital and the 'capable firm': narrating, visualising and numbering for managing knowledge, Accounting, Organizations and Society, 26(7-8), 735-762.
[45]  Petty, R., Cuganesan, S., Finch, N., and Ford, G., 2009, Intellectual Capital and Valuation: Challenges in the Voluntary Disclosure of Value Drivers, Journal of Finance and Accountancy, 1(1), 1-7.
[46]  Power, M., 1992, The politics of brand accounting in the United Kingdom, European Accounting Review, 1(1), 39-68.
[47]  Power, M., 1996, Making things auditable, Accounting, Organizations and Society, 21(2-3), 289-315.
[48]  Riahi-Belkaoui, A., 2002, Behavioral management accounting, London, Quorum books.
[49]  Roos, G., and Roos, J., 1997, Measuring your company’s intellectual performance, Long Range Planning, 30(3), 413-426.
[50]  Rosenfield, P., 2006, Contemporary issues in financial reporting, London, Routledge.
[51]  Seetharaman, A., Sooria, H. H. B. Z., and Saravan, A. S., 2002, Intellectual capital accounting and reporting in the knowledge economy, Journal of Intellectual Capital, 3(2), 128-148.
[52]  Shapiro, B., 1998, Objectivity, relativism and truth in external financial reporting: What’s really at stake in the disputes, Accounting, Organizations and Society, 22(2), 165-185.
[53]  Skoog, M., 2003, Visualising value creation through the management control of intangibles, Journal of Intellectual Capital, 4(4), 487-504.
[54]  Sloan, R., 1999, Evaluating the reliability of current value estimates, Journal of Accounting and Economics, 26(1), 193-200.
[55]  Snavely, H., 1967, Accounting Information Criteria, The Accounting Review, 42(2), 223-232.
[56]  Stewart, T. A., 1997, Intellectual Capital, New York, NY, Bantam Doubleday Dell Publishing Group.
[57]  Sudarsanam, S., Sorwar, G., and Marr, B., 2006, Real options and the impact of intellectual capital on corporate value, Journal of Intellectual Capital, 7(3), 291-308.
[58]  Sveiby, K. E., 1997, The Intangible Assets Monitor, Journal of Human Resource Costing & Accounting, 2(1), 73-97.
[59]  Sveiby, K. E., 2004, Methods for Measuring Intangible Assets, online article.
[60]  Traub, R. E., 1994, Reliability for the social sciences, London, Sage Publications.
[61]  Van de Vijver, F. J. R., 1998, Towards a theory of bias and equivalence, in Harkness, J. A. (ed.), Cross-cultural survey equivalence, Mannheim, ZUMA, 41-65.
[62]  Vatter, W., 1963, Postulates and principles, Journal of Accounting Research, 1(2), 179-197.