A Critical Exploration of Collected Data in Business Research: Is Data Trustworthy? A Comparison of a Survey and Interviews

Despite its crucial role within the research community, trustworthiness in data collection has received surprisingly little scientific attention in research articles. Therefore, the overall purpose of this paper is to explore and discuss trustworthiness in collected data through both interview and questionnaire methods. The results of these methods are reported as a methodological experiment. The answers from the questionnaires and the interviews were compared and illustrated using a "Divergence Index," which illustrates the coherence (trustworthiness) of the two data-collection methods. The two data-collection methods tested in the study provided different results and hence present different factors of importance for SMEs. The present paper concludes that (1) there are few reflections on the trustworthiness of collected data in prior research, and (2) the responses from the two data-collection methods show great divergence, which can have consequences for the decisions that CEOs and other decision-makers base on collected data.


Introduction
The research study discussed in this paper is centered on the importance of reflecting upon the chosen data-collection method within business research and of researchers enhancing the trustworthiness of their data.
Depending on the purpose of the research, well-designed data-collection methods are required in order to ensure the validity of the findings and contributions to theory and practice. Researchers can use various methods of data collection, such as observation, questionnaires, and interviews (see Figure 1, based on Ghauri & Grønhaug, 2005, p. 113), and the proper choice of data-collection method is critical for the issue of trustworthiness in research.

Data-collection methods
Two questions can be formulated regarding the choice of data-collection method: "Why are there different methods?" and "Is it possible to use only one?" The simple answer to these questions is that different situations (i.e., different research problems) require different methods of data collection. In other words, the selection of the most suitable method(s) for data collection in business research depends on the research problem and its purpose (i.e., the current understanding of the research problem). In any case, the collected data must be trustworthy, and researchers should reflect upon the trustworthiness of their collected data. Issues discussed in the literature concern ways to characterize trustworthiness by defining the concept through credibility (internal validity), transferability (external validity), dependability (reliability), and confirmability (objectivity) (Guba, 1981). For instance, Lincoln and Guba (1985) argue that ensuring credibility is one of the most important factors in establishing trustworthiness, for example by adopting well-established research methods, developing familiarity with the research setting, and using iterative questioning. Concerning transferability, Merriam (1998) states that it "is concerned with the extent to which the findings of one study can be applied to other situations". Dependability means that the processes within the study should be reported in detail, thereby enabling a future researcher to repeat the work (Shenton, 2004). Finally, confirmability relates to steps taken to ensure, as far as possible, that the work's findings are the result of the experiences and ideas of the informants rather than the characteristics and preferences of the researcher, for example through triangulation (Shenton, 2004).
In the research literature, aspects of trustworthiness are highlighted that are relevant to both quantitative and qualitative studies: (a) truth value, (b) applicability, (c) consistency, and (d) neutrality (Krefting, 1991). Krefting also highlights different strategies for assessing these criteria, which are important to researchers in designing ways of increasing the rigor of their qualitative studies: for instance, prolonged and varied field experience, time sampling, reflexivity (a field journal), triangulation, peer examination, interview technique, establishing the authority of the researcher, structural coherence, and referential adequacy (Krefting, 1991).
Trustworthiness is also discussed with a focus on the auditing process (Creswell & Miller, 2000) and on shared ground rules for drawing conclusions and verifying their sturdiness (Miles & Huberman, 1994). Miles and Huberman identified four characteristics that are necessary to assess the trustworthiness of the so-called human instrument: (a) the degree of familiarity with the phenomenon and the setting under study, (b) the ability to conceptualize large amounts of qualitative data, (c) the ability to take a multidisciplinary approach, and (d) good investigative skills.
In order to increase trustworthiness, researchers in business research can use different strategies, for example triangulation (Krefting, 1991). One way to enhance trustworthiness is to triangulate data sources and apply 'mixed methods research' (Molina-Azorin & Cameron, 2010). However, the use of mixed methods in business studies has seldom been studied. Hurmerinta-Peltomäki and Nummela (2006) conclude that in international business research, as in other social sciences, the roles of qualitative and quantitative methods seem quite fixed (cf. Teddlie & Tashakkori, 2006). They state that researchers seem to have rather "institutionalized mindsets in terms of empirical research designs" (p. 454), focusing on one traditional method (qualitative or quantitative). An interesting finding in their systematic review of empirical studies reported in international business research journals was that the most severe problem was the insufficient description of empirical designs in the reviewed articles.
To understand the extent to which researchers within the business research field have discussed trustworthiness in collected data, the authors of the present paper carried out a review of six selected journals (the Journal of Developmental Entrepreneurship; Entrepreneurship Theory and Practice; the Journal of Business Venturing; the Journal of Small Business Management; Small Business Economics; the Journal of Small Business and Enterprise Development). The review covered volumes of these journals published in the five-year period from 2005 to 2009, for a total of 1,060 articles. The journal articles were reviewed in accordance with the following three-step process: 1) The abstract was read so as to identify the study's aim and the method used; 2) The method section was read in order to analyze the researchers' discussion of the construction of the questionnaire and the characteristics of the questions; 3) The research limitations section (or equivalent) was read in order to identify any discussion regarding the trustworthiness of the data (e.g., the need for triangulation).
The review indicated that a very limited number of these articles (1%) discussed problems in the collected data, and most researchers did not discuss methodological considerations pertaining to 1) their data-collection process and 2) the reliability of their data sources. The review showed that the issue of "trustworthiness in collected data" is discussed in a general way but is not frequently focused on as a specific topic.
The above discussion shows, in our opinion, the necessity of the present study; therefore, the present paper explores and discusses the trustworthiness of data collected in both interviews and questionnaire-based surveys in business research.

Research Design
The journal review revealed a lack of discussion of the trustworthiness of collected data in the scientific journals.
www.ccsenet.org/ijbm International Journal of Business and Management Vol. 10, No. 8; 2015
This result provided important input toward fulfilling the purpose of the current study. Our intention in initiating the study was not to generalize or to produce a statistical analysis but rather to explore and raise awareness of the topic of data collection trustworthiness. As researchers, we wanted to investigate whether a study would obtain the same data and reach the same conclusions regardless of its data collection method. Specifically, the main interest of the study was to explore whether the interview method and the questionnaire survey method would generate the same data (i.e., the same answers) when using exactly the same questions. We assumed that the same answer would be generated by a demographic question like "How old are you?" regardless of the collection method. Similarly, we assumed that if the same answer (i.e., the same data) were not received, this would indicate a problem in the trustworthiness of the collected data. We further assumed that any divergence in the answers would not be primarily related to the particular data-collection methods; instead, it could be due to the respondents' willingness and earnestness in answering the question. However, divergence could also be related to who was actually answering the question, since when we distributed the questionnaire we could not be sure of the identity of the respondents; that is, we could not with absolute certainty state that a person answering the questionnaire was actually the one to whom it was addressed. Initially, we did not know what kinds of answers we would receive when we performed an interview, but our assumption was that if we posed exactly the same questions, we would receive the same answers regardless of the method used.
To fulfill the research purpose, the present research was based on two separate studies: (1) a survey involving a questionnaire, and (2) a study based on interviews. To investigate the issue of the trustworthiness of collected data, the following steps were followed: 1) The exact same questions were posed to the same companies in the written survey and in the face-to-face interviews; 2) The survey was sent to the company CEOs, and interviews were carried out with each CEO; 3) The questions posed were both qualitative (open) and quantitative (scale measurements) in both the survey and the interviews; 4) The answers to the questions were compared; and 5) Similarities and divergences between the answers were identified.
6) In addition, the questions in both the survey and the interviews covered the same period of the company's operations (2008–2009).
In order to enable the comparison of the data collected in the two studies, it was essential that the researchers posed exactly the same questions in the interviews as in the questionnaire; however, the aim was to compare the data obtained using the two data-collection methods rather than to evaluate collection methods or individual respondents. The interview guide and the questionnaire were constructed in two steps.
As a first step, the interview guide was designed. Forty questions were constructed and divided into the following categories: 1) Background information (e.g., turnover, number of employees, type of product, and production characteristics); 2) Characteristics of customers (e.g., number and type of customers); 3) Characteristics of competitors and competition (e.g., number of competitors and strength of competition); and 4) Success factors (e.g., in product development and in business development).
As a second step, a questionnaire was constructed consisting of 25 questions derived from the interview guide. Thus, 25 of the 40 questions in the interview guide were repeated exactly in the questionnaire. Four types of questions were defined (see Table 1). The four questions in Table 1 are examples of the exact same questions used in both the survey and the interviews, which consisted mostly of quantitative questions. To obtain a high response rate, the majority (21) of the questions in the questionnaire were of a simple character.
Both the interview guide and the questionnaire were pretested on two CEOs, which resulted in some alterations and clarification of the questions. A Swedish database (Affärs data) was used to identify and select companies for participation. A total of 36 small manufacturing companies were selected based on the inclusion criteria of 1) small companies (3-30 employees) and 2) companies representing different industries.
The questionnaire was distributed to the CEOs of the selected companies in two mail-outs carried out in May 2009. A total of 19 questionnaires were returned, representing a response rate of 53%. Since two responses were incomplete, a total of 17 questionnaires were useable.
The interviews with the CEOs were performed in September/October 2009. The participants in the interviews were all selected based on their participation in the survey. Since one company had ceased to exist, the sample for interviews included only 16 companies. Of the 16 respondents who were approached, 12 agreed to participate in the study. Each of the 12 interviews conducted lasted approximately one hour.
In order to fulfill the purpose of this paper (i.e., to explore and discuss the trustworthiness of collected data in interviews and in questionnaire-based surveys), three types of comparisons were performed of the data from the interviews and the data from the questionnaires. The three aspects of the data that were compared were the company descriptions, the question types, and the question categories.
As mentioned earlier, it should be noted that we had no intention of conducting a statistical analysis in this paper; rather, our purpose was merely to explore and discuss.
The first comparison consisted of company descriptions drawn from the questionnaires and the interviews. The company descriptions were structured according to the four categories of questions: background information, characteristics of customers, competitors and competition, and success factors. These descriptions were created separately by the two researchers and then compared in order to identify any similarities or divergences in the descriptions resulting from the use of two types of data collection.
The second comparison involved the answers in the questionnaires and the answers in the interviews in terms of the type of question (i.e., simple, complex, not well-defined, and well-defined).
Finally, the third comparison was made in the same way but according to the categories of questions. In other words, the answers in the interviews and the questionnaires concerning background information, characteristics of customers, competitors, and success factors were compared to determine whether there were any similarities or divergences among the answers.
To make it possible to compare the data, the 12 questionnaires completed by the 12 interview participants were utilized. Furthermore, only those questions in the questionnaire that were identical to those in the interview guide were compared. Most of the questions required answers on a Likert scale ranging from 1 to 5. Other answers were coded as numbers when the data were entered into SPSS (Statistical Package for the Social Sciences) (e.g., yes = 1; no = 0). This made it possible to perform statistical calculations such as mean values for every answer.
A divergence index (DI) was then designed to illustrate possible divergences between answers given on the questionnaires and in the interviews. To construct the DI, the mean value for a question in the questionnaire was divided by the mean value for the same question in the interviews. For example, for the question "What was the amount of products in 2008?", the mean answer was 930 products in the questionnaire but only 13.5 products in the interviews. The DI for this question was thus calculated as 930/13.5 = 68.9. If the DI is 1.0, no divergence is present. If the DI is less (greater) than 1.0, the interview answer had a higher (lower) value than the survey answer. Therefore, in this example, the DI value of 68.9 shows that the reported number of products was 68.9 times greater in the questionnaire than in the interviews.
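As a minimal sketch, the DI calculation described above can be expressed in a few lines of Python. Only the 930/13.5 products example is taken from the text; the function name is our own illustrative choice, not part of the study's materials.

```python
def divergence_index(questionnaire_mean, interview_mean):
    """Ratio of the questionnaire mean to the interview mean for one question.

    DI == 1.0 means the two data-collection methods agree exactly;
    DI > 1.0 means the questionnaire answer was higher, DI < 1.0 lower.
    """
    return questionnaire_mean / interview_mean

# Worked example from the text: mean number of products reported for 2008.
di = divergence_index(930, 13.5)
print(round(di, 1))  # 68.9 -> the questionnaire figure is ~69x the interview figure
```

The same ratio is computed for every question that appears identically in both instruments, which is what makes the per-question comparison in the following sections possible.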
The respondents' answers in both the interviews and the questionnaire were coded in SPSS 17.0, because the purpose of the research was to investigate the similarities and divergences between the two research methods, and this purpose is achieved by comparing means and standard deviations.
It should be noted that in this paper no attempt is made to investigate whether one data-collection method is superior to the other. Our sole purpose in this study is to highlight the divergences in the collected data and the absence of discussion in previous studies regarding trustworthiness in collected data. The data collected in this study are drawn from a small population, so we cannot show a pattern or conduct a statistical analysis; thus, we intend only to highlight issues of concern and to raise awareness regarding the issue of trustworthiness in data collection.

Empirical Findings
The empirical data were compared according to three classifications: company descriptions, question types, and question categories.

Company Descriptions
Below, we present two sketches of a "typical" company, based on the two studies. These descriptions are based, respectively, on the questionnaire and the interviews completed by the 12 CEOs of small manufacturing companies who participated in the study.

A Brief Picture of a Typical Company Based on the Questionnaire
Based on the questionnaire, the companies had a mean turnover of EUR 6.4 million in their first year of operation and EUR 8.0 million in 2008 (the year of the study), representing a 25% increase. Also, the average number of employees increased during the same period from 21 to 27 (a 29% increase). The typical company in this study was an SME, according to the EU definition. The average number of products manufactured by the company was 930. One third of the companies were not independent but were rather part of a corporate group. The typical company also had one large and many small customers, for a total of 685 customers. The companies' turnover originated partly from their biggest customer (23%) and partly from all other customers (77%). Of the total turnover, 63% came from customers within the municipality or county in which the company was located. The typical company's main competitors, of which there were fewer than 10, were local; despite having foreign customers, the company had no international competitors. Competition was perceived to be moderately intense.
Most companies felt that the main competition generated business development, which means that the typical company had a positive view of competition. The typical company stressed knowledge of its own technology as the most important factor in developing its products, its own production as the most important factor in its business development, and delivery reliability as its most important means of competition. Evidently, then, the typical company would seem to have been production-oriented.

A Brief Picture of a Typical Company Based on the Interviews
Based on the interviews, the companies had a mean turnover of EUR 0.7 million in their first year of operation and EUR 2.9 million in 2008 (a 75% increase). Also, the average number of employees increased from 4 to 13 during the same period (a 70% increase). The typical company in this study was a micro firm, according to the EU definition. The average number of products manufactured by the company was 14. The number of companies that were part of a corporate group had doubled since the founding of the companies. In 2008, one third of the companies were part of a corporate group, and 72% of the companies stated that their turnover came from customers in the municipality or county in which the company was located. The companies' turnover was shared between the biggest customer (28%) and all other customers (72%). The typical company had many small customers, amounting on average to a total of 648 customers. The main competitors, of which there were fewer than 10, were small local firms, but the studied companies also perceived competition from big companies in Sweden and from international small and large companies. The competition was perceived to be relatively challenging, but most companies had a positive view of competition; ten companies stated that the main competition generated business development. The typical company highlighted knowledge of its own technology as the most important factor for the development of its own products, understanding customer needs as the most important factor in business development, and delivery reliability as its most important means of competition. It seems evident that the typical company relied on a mix of market-oriented and production-oriented factors, important for business development and product development, respectively.

Analysis of the Company Descriptions
A summary of the divergences in the company descriptions is shown in Table 2. These descriptions are divided into five areas: background information, customers, competitors/competition, success factors, and orientation. The table highlights the main information within each area and the differences found in the description of a "typical" firm, which in turn resulted in different tentative implications and advice to the CEO on issues such as company growth.

Table 2. Description of a typical company generated from the questionnaire and the interviews

Background information
Questionnaire: The typical company was an SME, with a turnover of EUR 8.0 million (25% increase).
Interviews: The typical company was a micro firm, with a turnover of EUR 2.9 million (75% increase).

Customers
Questionnaire: The typical company had 685 customers; 23% of its turnover was generated by the biggest customer and 63% by local customers.
Interviews: The typical company had 648 customers; 28% of its turnover was generated by the biggest customer and 72% by local customers.

Competitors and competition
Questionnaire: The main competitors were local firms; 67% of the companies felt that the main competition generated business development; the perceived competition was moderately intense.
Interviews: The main competitors were local small firms, but large Swedish companies and international small and large companies were also identified; 83% of the companies felt that their main competition generated business development; the perceived competition was moderately hard.

Success factors
Questionnaire: The typical company highlighted its own production as the most important factor for its business development.
Interviews: The typical company highlighted understanding customers' needs as the most important factor for its business development.

Orientation
Questionnaire: The typical company was production-oriented.
Interviews: The typical company relied on a mix of market-oriented and production-oriented factors.
Viewing the collected data, it is obvious that the company descriptions differ between the two data-collection methods, which can have consequences for research results and hence for the resultant guidelines, managerial implications, and advice to CEOs.

A Comparison of Answers by Question Type
In a second comparison, answers to questions in the questionnaires and interviews were compared. The comparison was made for each of the four main question types: 1) complex and not well-defined, 2) complex and well-defined, 3) simple and well-defined, and 4) simple and not well-defined.
As explained earlier, the DI we designed shows divergences in the data in cases where the answers in the questionnaire differ from those given in the interviews. If the DI is 1, there are no divergences. The answer to one specific question, or part of a question, in the questionnaire is illustrated by one "x." In Figure 3, 47 "x"s appear, of which only 3 are equal to 1; this shows that only 6% of the answers given in the questionnaires and interviews are identical. For the other 44 questions (94%), the "x"s are less than or greater than 1, showing that the answers are not identical. The comparison of all 44 questions shows that the DI varies between 0.6 and 68.9. The figure also indicates that simple questions have a greater DI than complex questions; in other words, the answers to complex questions in the interviews and questionnaires are more similar to each other than are the answers to simple questions. Table 3 summarizes the total DI for each type of question and their means. In quadrant I, the summarized DI is 7.2 and the mean is 0.9, which are the lowest values of the four types presented. This shows that the answers in the interviews and in the questionnaire to questions that are both complex and not well-defined have the highest similarity in the study. Conversely, the answers to the questions that were simple and well-defined (quadrant III) have the highest DI (29.3) and the highest DI mean (2.0). This indicates a difficulty for respondents in answering this type of question. In other words, it seems that it was more difficult for respondents to answer simple and well-defined questions than it was to answer complex and not well-defined questions. When compared to our expectation that simple and well-defined questions would be easier to answer than complex and not well-defined questions, this is a very noteworthy result.
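The per-type sums and means reported in Table 3 amount to a simple grouping of the per-question DI values by question type. The sketch below illustrates the aggregation step only; the DI values and labels in it are hypothetical placeholders, not the study's data:

```python
from collections import defaultdict

# (question_type, di) pairs; the DI values here are invented for illustration.
answers = [
    ("complex, not well-defined", 0.9),
    ("complex, not well-defined", 1.1),
    ("simple, well-defined", 2.4),
    ("simple, well-defined", 1.6),
]

# Group the DI values by question type, then report sum and mean per type,
# mirroring the "summarized DI" and "DI mean" columns of Table 3.
totals = defaultdict(list)
for qtype, di in answers:
    totals[qtype].append(di)

for qtype, dis in totals.items():
    print(qtype, "sum:", round(sum(dis), 1), "mean:", round(sum(dis) / len(dis), 1))
```

With the study's actual 47 answers in place of the placeholders, this grouping would reproduce the quadrant totals (e.g., 7.2 and 0.9 for complex, not well-defined questions).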

A Comparison of Answers by Category of Question
Answers concerning the four categories addressed in the questionnaire and interviews were compared: 1) Background information; 2) Customers; 3) Competitors and competition; 4) Success factors.
The comparisons are illustrated using the Divergence Index. Figure 4 shows the DI of the four categories of questions presented in the questionnaires and interviews. Each of the 49 "x"s in the figure corresponds to an answer to one specific question, or part of a question, in the questionnaire and interview. If the DI is 1, there is no divergence between the answer in the survey and the answer in the interview. Six out of 49 "x"s are equal to 1, which means that 12% of the answers in the surveys and interviews are identical. The DI values of the other 43 questions (88%) are greater than or less than 1, which indicates that the answers are not identical. The DI varies between 0.4 and 80.4. The figure also shows that the answers to the questions linked to the category "background information" (e.g., turnover, number of employees, type of product, and production characteristics) have the highest DI.

Analysis of Data according to Question Category
The answers provided in the questionnaire and in the interviews resulted in two different company descriptions. These descriptions were based on the four question categories: background information, customers, competitors and competition, and success factors. In other words, these four question categories constitute the foundation of the presented images. However, the company descriptions diverge in some areas (see Table 2); these divergences can be seen as a result of divergences among the answers to the four categories of questions. These divergences are visible in all four categories of questions (Table 4); however, the DI differs greatly among the answers to the questions in the four categories. The category "background information" has the highest summarized DI and also the highest DI mean. This category consists of questions of the simple type (e.g., "How old are you?"). This may explain the high DI for background information, since the answers to questions of the simple type also show a high DI. However, divergences are visible in all categories of questions, which raises at least two questions: 1) What creates these divergences? 2) Do these divergences matter?
In answer to the first question, the divergent descriptions result from different answers given to the same questions in the questionnaire and the interviews. The DI for the four categories of questions is shown in Figure 4. The DI varies between 0.4 and 80.4. How do such variations in DI influence the descriptions of the studied companies? Does every variation in DI affect the result? In Figure 5 above, we can observe a "comfort zone" where a variation in the DI does not influence the descriptions. In this study, the comfort zone is the area between DI 0.7 and DI 1.3 in Figure 5. In other words, divergences inside the comfort zone do not influence the descriptions, but divergences outside the comfort zone do. Therefore, the answer to the second question stated above is this: yes, the divergences do matter, but only when they fall outside the comfort zone (i.e., only when the DI has reached sufficient values to influence the descriptions).
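The comfort-zone rule described above is straightforward to encode. A minimal sketch, assuming the zone boundaries of DI 0.7 and DI 1.3 given in the text (the function name and sample DI list are our own illustrative choices):

```python
def within_comfort_zone(di, low=0.7, high=1.3):
    """A divergence is treated as negligible when the DI falls inside the zone."""
    return low <= di <= high

# Only DIs outside the zone are considered to influence the company descriptions.
dis = [0.4, 0.9, 1.0, 1.3, 68.9]
influential = [di for di in dis if not within_comfort_zone(di)]
print(influential)  # [0.4, 68.9]
```

Note that the boundaries are a judgment call specific to this study; a different dataset might warrant a narrower or wider zone.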

Analysis-Scenarios of Results
In light of the results from the two data-collection methods, the following implications and advice for CEOs concerning sustainable business and growth can be derived.
Regarding potential implications of the results, the answers given in the questionnaire indicate a scenario in which the typical company has many products and many customers (see Table 2). The typical company also has a local market and many local competitors. In other words, the typical company is active in a local market with many products and customers. In sum, the results of the questionnaire imply the following advice to managers: reduce the number of products and customers to focus on profitable ones, but also broaden the geographical market and develop new competitive means in order to obtain business growth. To be able to meet the demand from, for example, new customers in new markets, the typical company must also change its approach from product orientation to market orientation.
The interviews, on the other hand, indicate a scenario in which the typical firm has few products but many customers. The typical company is active in a local market but also has national and international competitors in addition to local competitors. In other words, the typical company is active in a local market with many customers and many competitors. The managerial recommendations implied by the interviews are as follows: develop differentiated products, which in turn will reduce the number of competitors; reduce the number of customers; and develop new competitive means in order to obtain business growth. The typical company as presented in the interviews relied on a mix of product and market orientation to develop differentiated products; thus, advice for such companies should highlight the need for a more market-oriented approach. Thus, the results of the two data-collection methods present different implications and advice to CEOs for company growth, which certainly represents a problematic situation considering that the same companies responded to both the questionnaires and the interviews (see Table 5).
Table 5. Scenarios and implications derived from the two data collection methods

Products
Questionnaire: Reduce the number of products to focus on the most profitable.
Interviews: Maintain the present number of products if they are profitable; develop differentiated products.

Customers
Questionnaire: Reduce the number of customers, acquire more big customers, and focus on profitable customers.
Interviews: Reduce the number of customers, acquire more big customers, and focus on profitable customers.

Market
Questionnaire: Increase the geographical markets; the company should not be active only in the local market.
Interviews: Reduce the number of markets and concentrate the firm's activities in fewer markets.

Competition
Questionnaire: To confront new and probably harder competition in new markets, the company must use new competitive means.
Interviews: To strengthen market development, the company must use new competitive means.

Orientation
Questionnaire: The company must change its approach from production orientation to market orientation.
Interviews: The company should focus more on market-oriented factors.

Summary and Conclusions
The literature review presented earlier in this paper revealed that only a limited number of research articles discuss the trustworthiness of collected data. Therefore, the present paper set out to explore and discuss the trustworthiness of data collected in both interviews and questionnaire surveys by using data generated through both these means.
Our assumption was that we would receive the same answer to a given question regardless of the method used for data collection. In this study, we compared the collected data and identified the managerial implications from two separate studies based on two different data-collection methods. The results show divergences in answers across company descriptions and across different types and categories of questions. Some of the divergences are negligible (i.e., those inside the comfort zone), but those outside the comfort zone affect both the company descriptions and the managerial implications drawn from them.
This finding indicates that there are ambiguities (i.e., a lack of trustworthiness) in the results of the two data-collection methods. These ambiguities may be caused by, for example, someone other than the CEO to whom the questionnaire was addressed actually completing the questionnaire. Such divergence, and the resulting lack of trustworthiness in the collected data in business research, can have consequences for research results and hence for the managerial implications and advice to CEOs drawn from that research.
As an example, the results of the questionnaire indicated that (1) the typical company in the study was an SME, (2) its main competitors were other local companies, and (3) it was production-oriented. A potential piece of advice to a CEO derived from the questionnaire findings could be formulated as "increase the geographical markets." The results of the interviews, on the other hand, indicated that (1) the typical company was a micro firm, (2) its main competitors were local companies, large Swedish companies, and small and large international companies, and (3) it relied on a mix of market-oriented and production-oriented factors. The advice that emerged from the interviews could be formulated as "reduce the number of markets." Such deviations among the implications and advice drawn from the studies using the two methods represent a serious problem that can generate further problems for CEOs, shareholders, and other decision-makers.
When developing this study, we expected that the types of questions would play a role in the compliance or divergence of the empirical findings of the two data-collection methods used. For example, we expected that answers to simple questions (e.g., "How old are you?", "What was the company turnover in 2008?", "How many yearly employees did the company have in 2008?", and "How many different competitors does the company have today?") would be similar in the questionnaire and the interview. In contrast, complex questions requiring a degree of interpretation were expected to lead to a higher degree of divergence in the comparison of the two studies. Contrary to our expectations, however, the comparison showed that data related to questions that were "simple and well-defined" were less consistent (i.e., less trustworthy) than answers to questions identified as "complex and not well-defined." Notably, simple questions related to background information had the highest Divergence Index. Why was so much divergence found in this category of questions? Had the respondents not responded truthfully or earnestly to the questionnaire? The answers to these questions are beyond the scope of the present paper, but the issue of the possible occurrence of divergence in data collection needs to be discussed.
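The paper's exact formula for the Divergence Index is not reproduced in this section. Purely as an illustration of the idea, the sketch below computes a simple per-question divergence (the absolute relative difference between a questionnaire answer and the corresponding interview answer) and flags values falling outside a hypothetical "comfort zone" threshold. The 10% threshold, the function names, and the sample data are assumptions for illustration, not the authors' actual measures.

```python
# Illustrative sketch only: a simple divergence measure between paired
# answers from two data-collection methods. The relative-difference
# formula and the 10% "comfort zone" threshold are assumptions, not the
# paper's actual Divergence Index definition.

def divergence(questionnaire_value, interview_value):
    """Relative difference between two numeric answers to the same question."""
    denom = max(abs(questionnaire_value), abs(interview_value))
    if denom == 0:
        return 0.0  # both answers are zero: no divergence
    return abs(questionnaire_value - interview_value) / denom

COMFORT_ZONE = 0.10  # hypothetical threshold below which divergence is negligible

def compare(answers):
    """answers: dict mapping question -> (questionnaire answer, interview answer).

    Returns, per question, the divergence and whether it lies outside
    the comfort zone (i.e., whether it matters for decision-making).
    """
    report = {}
    for question, (q_val, i_val) in answers.items():
        d = divergence(q_val, i_val)
        report[question] = (d, d > COMFORT_ZONE)
    return report

# Hypothetical paired answers (e.g., number of competitors, number of employees)
sample = {
    "competitors": (5, 12),   # diverges strongly: outside the comfort zone
    "employees":   (9, 10),   # small divergence: inside the comfort zone
}
print(compare(sample))
```

A comparison along these lines makes the paper's point concrete: not every divergence is a problem; only those exceeding the (admittedly hard-to-fix) comfort-zone boundary should change the resulting managerial advice.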
www.ccsenet.org/ijbm International Journal of Business and Management Vol. 10, No. 8; 2015
As stated earlier, business research is usually designed to enhance managers' understanding of how business organizations work. It is also frequently suggested that the best management research should lead to the development of guidelines by which individuals in positions of responsibility can manage their business responsibilities more efficiently and effectively (Remenyi et al., 1998). However, with reference to the results of the present study, one might question whether all implications and advice to managers are correct, since they may be based on inaccurate or untrustworthy data. As we have shown in the present study, divergences can emerge in collected data, which can result in imperfections in, for example, managerial advice or decision-making. We illustrate the impact of the divergence in the results with the "comfort zone," whose limits divergences must exceed in order to matter when discussing problems associated with, for instance, decision-making based on results from either the questionnaire or the interviews. However, there is no easy way to determine the boundaries of the comfort zone. The comfort zone must therefore be considered more as an illustration than an exact boundary; in other words, it can be used as a tool to explain problems with the trustworthiness of collected data.
This paper's conclusions may be summarized as follows: 1) In prior research (i.e., in the reviewed journals), few reflections have appeared on the trustworthiness of collected data; 2) There are divergences in collected data when we compare answers generated from a survey questionnaire with those generated in interviews; 3) Not every divergence in collected data is a problem; rather, the amplitude of the "Divergence Index" determines the magnitude of the problems it causes; 4) The present study illustrates great divergences (i.e., a lack of trustworthiness) among answers to simple and well-defined questions; 5) All researchers should reflect upon their data and bear in mind that data are not necessarily always trustworthy; and 6) Such untrustworthiness can have consequences for decision-making based on the collected data.
As shown above, future research on this topic is needed. At this stage in the research, this paper does not claim to explain the indicated divergences, but rather seeks only to explore them. One area for further research is to expand the present study to include different types of companies (e.g., different sizes, different industries) in order to investigate whether it is possible to obtain generalizable findings concerning the trustworthiness of collected data. Another avenue for research would be to investigate how to obtain truthful answers in questionnaire-based surveys and in interviews. How is the trustworthiness of collected data influenced by different attempts to obtain a high response rate in surveys, such as lotteries in which respondents can win prize money or other rewards? Do these prizes make questionnaire respondents more dedicated and trustworthy? A third route for future research would be a study that clarifies how to conduct triangulation in a way that does not require time-consuming effort. Triangulation is often used as a means to enhance research reliability and validity; however, this strategy may not be suitable for all research purposes, as various constraints (e.g., cost, time) may prevent its effective use. Nevertheless, triangulation has vital strengths and encourages productive research.