The Impact of Dividend Payment on Market's Expectation after the Global Crisis

This paper examines whether there is any change in investors' behavior toward the dividend payout policy of US corporations after the outbreak of the global financial crisis of 2007-2008. To do so, dividend signaling theory is adopted as the main theme, and we build econometric models describing the influence of dividend payments on the market's expectations. Unlike most previous studies, which use either price-dividend or dividend-earnings type tests to provide empirical evidence for the signaling theory, this paper performs a new type of test, the present value of growth options (PVGO)-dividend test, inspired by the recent development of real options as an investment valuation tool. To our knowledge, this paper is the first to examine the implications of the signaling models in the context of the relation between dividends and PVGO using a real options technique. Our results show that in a market flooded with information, such as the US financial market, dividends no longer play a significant role in delivering additional valuable information to investors. This outcome is highly inconsistent with dividend signaling theory.


Introduction
The term "Dividend Puzzle" was first coined by Black (1976): companies with high dividend payments are rewarded by investors with higher stock prices, although it should not matter to investors whether a firm pays dividends or not. There have been numerous theoretical as well as empirical studies trying to provide a comprehensive insight into this phenomenon. Yet dividend payout policy has remained an interesting paradox, and after decades financial economists are still struggling to understand whether dividend policy has an impact on firms' valuations. Seemingly, dividend signaling theory has emerged as the most promising puzzle solver. Based on the proposal presented by the three Nobel laureates George Akerlof, Michael Spence, and Joseph Stiglitz, the theory suggests that dividends are used to address the information asymmetry between managers and investors in the market. Some of the most frequently cited theoretical dividend signaling models include Bhattacharya (1979), John and Williams (1985), and Miller and Rock (1985). Although these models take different approaches, they share one common view: managers use dividends to signal the real value of their firms to the market.
Empirical testing of dividend signaling broadly falls into two major categories: price-dividend and dividend-earnings type tests. Still, the results of those tests are often contradictory and do not provide a satisfactory resolution of the puzzle. Providentially, real options appear to be a game-changer in the way dividend signaling theory can be empirically tested. When Stewart Myers first presented the idea of real options in 1984, it was expected to close the gap between strategic management and corporate finance by giving financiers more flexibility in investment decision making. Gradually, researchers have learned that real options can be a powerful key to answering the dividend puzzle. Several studies use real options to show that the present value of growth options (PVGO), an embedded part of a firm's market value, reflects the market's expectation of the firm's future prospects. Surprisingly, no one has previously researched the link between dividend payout policy and PVGO; this paper examines that link and delivers solid evidence on the impact of dividends on investors' confidence.
Using a panel data approach, fixed effects and random effects models in particular, we build models to examine the relationship between PVGO and dividend payments. The final result suggests that dividend signaling theory might not hold true in the market's current condition. Nonetheless, this outcome can serve as an important milestone for other analysts to develop further. The paper consists of three main sections. After reviewing some of the renowned works related to dividend payout policy, dividend signaling theory and real options in Section II, it describes in detail the data selection and variable calculation process in Section III. The panel data regression models are also explained in that part. Subsequently, the empirical results are discussed in Section IV.

Dividend Puzzle
Dividends are one of the principal mechanisms by which corporations disburse profits to their shareholders. Dividend payout policy is the procedure by which managers decide on the level of retained earnings and the total amount of dividends to be distributed to shareholders. Many financial economists believe that this decision has a great influence on a firm's value and, in turn, the wealth of its owners. Therefore, the dividend decision is undoubtedly a critical subject that requires careful management from CFOs and attracts a great deal of academic attention. For decades there has been much debate on the issue, and yet it remains one of the thorniest problems in corporate finance. It was even referred to as the "Dividend puzzle" by Black (1976): "The harder we look at the dividend picture, the more it seems like a puzzle, with pieces that just don't fit together." Based on discussions of the magnitude of dividend payments relative to after-tax earnings and investments, and of the importance of dividends as a component of total stock returns, the paper questions the rationale behind the existence of dividends and their role in the financial market. It also puts forward a number of debatable propositions about the nature of dividends.
The Miller-Modigliani (M&M) dividend irrelevance theorem was introduced by Merton Miller and Franco Modigliani (1961) in a seminal academic paper denying the relevance of dividends for a firm's valuation. Under perfect capital markets (Note 1), they argue that dividends have no influence on either the value of a firm's shares or the returns to investors. Higher dividend payouts result in lower retained earnings and capital gains, and vice versa, with no impact on the total wealth of the shareholders. In addition, to some extent, a dividend can simply be considered an exchange of current cash for future cash of equal market value when the securities sold to finance any current incremental dividends are reasonably priced. As Miller (1988) later stated, a current incremental dividend is "not much different in principle from withdrawing money from a pass-book savings account".
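The irrelevance argument can be illustrated with a stylized numerical example (all figures hypothetical): in a frictionless market, paying a dividend lowers the share price by exactly the dividend amount, leaving total shareholder wealth unchanged.

```python
# Stylized illustration of M&M dividend irrelevance in a frictionless
# market: a dividend converts share value into cash one-for-one.
firm_value = 100.0                     # total equity value before the dividend
shares = 10.0
price_cum = firm_value / shares        # cum-dividend price: 10.0

dividend_per_share = 1.0
price_ex = price_cum - dividend_per_share   # ex-dividend price: 9.0

wealth_before = price_cum                        # per-share wealth: 10.0
wealth_after = price_ex + dividend_per_share     # 9.0 + 1.0 = 10.0
print(wealth_before == wealth_after)  # True: wealth is unchanged
```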
Even though the M&M theorem has stood the test of time, given the assumptions that gave rise to its conclusion, the majority of financial experts strongly believe that dividend policy can influence owners' wealth. They also support the idea that the share price will not decline by the same amount as the incremental dividend paid. In fact, the dividend irrelevance theory becomes highly arguable in the real-world market with all of its imperfections or frictions. These include various tax rates, security issuance and flotation costs, brokerage commissions, conflicts of interest between managers and owners, and differences in information between insiders and outsiders. Such frictions pose several significant hurdles to drawing plausible conclusions regarding the impact of dividend policy on the share price.

Information Asymmetry
In a perfect capital market, all participants, including managers and other stakeholders, have equal access to a firm's information. However, if one party has privileged knowledge about the firm's future prospects and tries to take advantage of it over the others, informational asymmetry occurs. This problem featured prominently in the paper "The Market for Lemons: Quality Uncertainty and the Market Mechanism" by George Akerlof in 1970. Since most financial academics and practitioners agree that managers have better knowledge about their firms' future development than investors do, the signaling interpretation has become increasingly insightful. To advance in this subject, academics are required to provide both theoretical and empirical evidence. Miller and Modigliani suggest that the reason for managers to make dividend announcements is to bring market expectations of a firm's future earnings closer to their own perceived level. This proposal is well accepted among the chief financial officers of large US corporations: a survey conducted by Abrutyn and Turner (1990) showed that more than 60 percent of them rated a signaling hypothesis as the most or second-most influential determinant of dividend payouts.
Although the idea of dividends signaling information to the market was historically presented as a heuristic proposition, only recently has a rigorous logical structure been provided by the dividend signaling hypothesis. The most widespread signaling models were developed by Bhattacharya (1979), John and Williams (1985), and Miller and Rock (1985). Based on the asymmetric information assumption, the authors imply that managers, who are superior to investors in terms of private information, use dividends to signal a firm's future prospects to the market. They also suggest that dividend payments often occur when managers realize that there is a mismatch between their firm's market value and its intrinsic value. Consequently, an increased dividend payment plays a key role in providing a credible signal. Other firms that do not have advantageous inside information cannot imitate the dividend increase without simultaneously raising the risk of experiencing a dividend cut later. Thus, the implication of the dividend signaling hypothesis is that an increase (decrease) in cash dividends should be followed by a positive (negative) price reaction.
However, the signaling models are subject to several criticisms. Lease et al. (2000) point out that Bhattacharya fails to clarify the specific level of dividends that a firm commits to distribute to its shareholders. Since a declared dividend is only a payment to the residual claimants, firms are not obliged to maintain the dividend by issuing costly external financing if cash flow falls short of managers' expectations. Realizing this lack of obligation, the market would not attach any importance to pre-committed dividends. Miller and Rock's model, for its part, predicts an announcement effect of dividend changes on share prices. However, instead of taking into account the level of cash dividends paid by a firm, the dividend term in the model denotes the sum of cash dividends and stock repurchases net of any external financing. This leads to the result of zero cash dividends when there are adverse tax consequences for cash dividends. Hence, the model is not capable of solving the dividend puzzle.

Dividend Smoothing
Arguably, the major drawback of the signaling models is their premise that dividends are paid to signal new information. In that case, the level of dividends should fluctuate to reflect the new information (Allen & Morris, 1998). This feature of the dividend signaling model is contradicted by the "smoothing" observation: a firm's dividend payout may not change over a period of time even though earnings change substantially. Kumar (1988) makes an important contribution to tackling this issue by developing a "coarse signaling theory". He argues that only when a firm moves outside a range of productivity is a change in its dividend level triggered. This argument is consistent with the fact that firms are keen on keeping a stable dividend level over time. In addition, he shows that there is a sequential equilibrium in which dividends act as a coarse signal of future cash flow. That is, in equilibrium, dividends show less variability than cash flows, as documented by Shiller (1981). The equilibrium also implies that dividends are poor predictors of future cash flows.
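Kumar's coarse-signaling idea can be sketched as a simple decision rule; the band and payout ratio below are hypothetical and not taken from his model. The dividend is revised only when earnings leave a target range, so it varies less than earnings.

```python
# Hypothetical sketch of a "coarse" dividend rule: the payout is held
# fixed while earnings stay inside a band, mimicking dividend smoothing.
def coarse_dividend(prev_dividend, earnings, low=80.0, high=120.0, payout=0.4):
    if low <= earnings <= high:    # earnings within the normal range:
        return prev_dividend       # keep the dividend unchanged
    return payout * earnings       # otherwise reset it to a target payout

# Earnings fluctuate every period, but the dividend changes only once
earnings_path = [100, 110, 95, 130, 115]
div, path = 40.0, []
for e in earnings_path:
    div = coarse_dividend(div, e)
    path.append(div)
print(path)  # [40.0, 40.0, 40.0, 52.0, 52.0]
```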
In an attempt to explain why firms are willing to stabilize dividends in well-functioning capital markets, John and Nachman (1986) analyze the optimal level of dividends paid to shareholders as a product of two terms. The first term is the total extent of financing done at the firm level and the shareholder level, and the second is the level of optimism in the private information of the firm. In this way, no matter how much the private information of the firm differs from that of the market, the optimal dividends would be approximately the same. In other words, dividends would be highly stable even though realized earnings may be volatile.

Empirical Testing of Dividend Signaling Theory
Empirical testing of dividend signaling is broadly categorized into two main types, namely price-dividend and dividend-earnings tests. In general, the first testing method runs OLS regression analysis of observed stock prices or returns against dividends, accompanied by other explanatory variables. Fama and French (1998) adopt a different approach, with the standardized total market value as the dependent variable. Other researchers use event studies to observe the impact of dividend changes on market expectations. The other method, in comparison with the first one, uses various types of earnings, including past, current and future earnings, as independent variables to study the link between dividends and a firm's future prosperity. Pettit (1972) provides convincing evidence that share price movements follow the trend of dividend changes and that those changes are informative to market participants. He also supports the proposition that "the market makes use of announcements of changes in dividend payments in assessing the value of a security". This comment is backed by a number of papers such as Charest (1978), Patell and Wolfson (1984), Laub (1976), Asquith and Mullins (1983) and Bajaj and Vijh (1990). Some other authors have explored how informative the quarterly dividend still is to the market when current earnings are already known. Aharony and Swary (1980) study a sample of 149 industrial firms listed on the New York Stock Exchange that released their current earnings at least ten days before their dividend declaration. They find that "capital market reaction to the dividend announcements studies strongly support the hypothesis that changes in quarterly cash dividend provide useful information beyond that delivered by corresponding quarterly earning numbers". Unlike the previous papers that use per-share figures, Hand and Landsman (2005) use total market value and total dividends to regress firms' market equity values on dividends and other explanatory variables. In contrast, Fama and French (1998) apply a modified version of Ohlson's accounting-based equity valuation model to examine the relation of the market value of equity to book value, earnings, and dividends. Both studies draw a similar conclusion: dividends' explanatory power for firms' market value is statistically significant.

Price-Dividend Tests
However, Chin-Bun Tse (2005) criticizes Fama and French (1998) for not explaining the results for profitable companies precisely enough. They focus on free cash flow theory rather than trying to answer the question of whether managers of those companies used dividends as a signaling mechanism for future continuing prosperity. As their result contradicts the theory, they conclude that dividends are not used by managers to alleviate concerns about the mismanagement of the firm's profit. In addition, Chin-Bun Tse (2005) also casts doubt on the fact that Fama and French use the absolute level of dividends in their study instead of the changes in dividends, which are broadly believed to have a signaling effect. Remarkably, Miller and Rock (1985) indicate that without the presence of a signaling mechanism, price changes could still correlate with dividend changes. This occurs when dividend declarations are successively followed by current earnings announcements. Investors in this situation would use dividends to obtain information on how the firms use profits, and the share prices will be adjusted to reflect these inferred current earnings but not the "signaled" future ones.

Dividend-Earnings Tests
Unlike the first testing method, the empirical evidence from dividend-earnings tests is quite paradoxical. The hypothesis that dividends are informative about the future earnings of the company was first examined by Watts (1973). In that paper, future earnings are regressed on historical earnings and dividends. The results show that there is a positive but weak relationship between current dividends and future earnings. He indicates that the future-earnings-related information conveyed by unanticipated dividend changes is "trivial". This argument is supported by other studies, including Watts (1978) and Gonedes (1978). Penman (1983) observes how the presence of management earnings forecasts can affect the dividend-earnings relationship. He suggests that as soon as those forecasts are announced, they could eliminate dividends' signaling power. By comparing the estimated returns of portfolios based on management's forecast earnings with those based on dividend signals, he concludes that the amount of information provided by those direct statements is greater than that provided by dividend signals. Additionally, since managers voluntarily and explicitly provide the forecast to the market, shareholders would bear fewer costs. The paper poses a question: as management's forecasts of future earnings are apparently superior to dividends, why should firms use dividends to signal future earnings? Benartzi, Michaely, and Thaler (1997) test the signaling theory by observing the mean and median of earnings changes of sample firms in a four-year window, starting from the year before the event (t0 − 1) and ending two years after the dividend change (t0 + 2). OLS regression is used to study the relationship between earnings changes and dividend changes. The empirical result essentially shows that dividends signal the past rather than the future condition of a firm. At the same time, there are several empirical studies in favour of dividend signaling. For instance, Brickley (1983) finds that firms with dividend increases of more than 20% also enjoy earnings growth both in the same year and the year after. Similarly, Aharony and Dotan (1994) report that earnings increase in the year following a dividend increase. Healy and Palepu (1988) claim that it takes two years for earnings to rise after companies start to pay dividends for the first time. Later, Garrett and Priestley (2000) find significant evidence from stock market data that dividends convey information regarding unexpected positive changes in current permanent earnings.

Dividend-PVGO Tests Using Real Options Approach
As we saw in the last chapter, there is a bewildering variety of theoretical explanations as well as empirical evidence trying to solve the dividend puzzle. Still, none of them provides a convincing answer to the question of whether firms should pay dividends or not. However, the recent development of real options might shed new light on this dilemma.

The Fundamentals of Real Options
The "real options" terminology was first introduced by Stewart Myers (1984). In the last couple of decades, it has developed in response to the need for a new investment valuation tool that effectively captures the dynamic change of the business environment. Unlike traditional investment appraisal techniques, such as Net Present Value, which assume that investment is either irreversible or cannot be delayed (Dixit & Pindyck, 1995), this new approach provides market participants much more flexibility when it comes to investment decision making. The idea is that investment is not a one-time decision; investors have choices to delay or abandon an investment when conditions become unfavorable, or to expand it when more positive news becomes available. In many cases, an investment can appear uneconomical at the beginning when viewed in isolation, but it may later create a window of opportunity for investors to make profits in the future (Damodaran, 2005).
In the current economic environment, characterized by rapid change and great uncertainty, real options, with their flexibility, tackle these challenges more effectively and efficiently than traditional valuation techniques (Leslie & Michaels, 1997). The approach explicitly recognizes that future decisions designed to maximize value will depend on new information, such as changes in financial prices or market conditions, that can be acquired over the course of time or through some exploratory investment. This feature of real options is analogous to that of financial options. While a stock option's value and the decision to exercise it are determined by the future share price, the exercise decision of a real option depends on the future value of the underlying real asset, which is the investment project. In short, the real options method adopts the fundamentals of pricing financial options to improve the techniques of investment decision analysis.

Present Value of Growth Options (PVGO) in Share Prices
The development of real options has done much to advance our understanding of corporate finance. Researchers have successfully separated the "growth options" part from the stock price in order to increase accuracy when observing the impact of dividend payout policy on market expectations. The market value of the firm comprises the value of assets in place and the present value of growth opportunities. The latter term is associated with the value of future investments that can potentially generate rates of return higher than the opportunity cost of capital. According to Chung and Charoenwong, growth opportunities exist when the competition that pushes the required rates of return on projects toward the firm's cost of capital ceases or is postponed. Since it is not necessary for the company to pursue all of its future investment opportunities, the value of these opportunities is best described as the present value of the company's options to make future investments (Myers, 1977).
Empirical results suggest that growth opportunities often make up a substantial part of the market value of equity. For instance, Kester (1984), after comparing the capitalized value of firms' equity, concludes that the value of growth options is half or more of the market value of equity for many firms, and can be as high as about 75% for industries with high demand volatility. Pindyck (1988) claims that the fraction of market value attributable to the value of capital in place should be one-half or less for firms with reasonable demand volatility. Smit (2000) estimates the option features of growth stocks in the US and offers further support for the impact of growth options on share prices over a 10-year period starting from 1988. In the UK market from 1990-1996, Al-Horani, Pope and Stark (2003) indicate that returns are linked with the ratios of R&D spending to market value and book-to-market, as forecast under real options analysis. Adam and Goyal (2002) specifically evaluate the value of growth options for a sample of mining companies, and find that the market value of a company's investment opportunities is empirically related to the book-to-market ratio of its assets. Smit and Trigeorgis (2006) propose two possible ways to estimate PVGO, one from firms' business and one from the financial market. The former approach creates considerable complexity, since it requires a number of inputs, including identifying the portfolio of the firm's strategic options and approximating several option parameters to estimate the value of the individual options in the firm's business. In contrast, the "market" approach offers a much more straightforward calculation process. A firm's market value is viewed as the sum of two components. The first portion is the base DCF value linked with sustained operations from past investments (i.e. assets in place), operated under a no-growth policy. The remaining part is associated with the firm's growth options. While assets in place can be estimated using standard DCF techniques, the growth options component, which Smit and Trigeorgis measure along one axis of their ROG matrix, equals the firm's market value minus this base DCF value. Myers (1977) and Smit and Moraitis (2010) share the same idea that the present value of a company's growth options (PVGO), along with its assets in place (AIP), accounts for a significant proportion of the company's market value (MV). Kester (1984) and Brealey et al. (2011) also use the market approach to value PVGO, but their calculations are slightly different: the growth options are equal to the difference between the value of a company's equity per share (S) and its capitalized current earnings per share (EPS).
If managerial and market perceptions of the firm's growth opportunities coincide, it is possible to observe, on a per-share basis, the implied market valuation of the bundle of growth options by adjusting the current stock price for the value of earnings generated by the assets already in place (Smit & Trigeorgis, 2006). Under a hypothetical no-further-growth policy, only the current assets are sustained, and the stock price would equal the present value of the firm's expected future earnings per share discounted at its opportunity cost of capital. The market value of a firm's growth options therefore reflects investors' expectations, which are influenced by publicly available information. Hence, news such as a firm's dividend declaration would reveal information concerning the firm's future prospects and thereby influence its market value.
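The per-share adjustment described above (Kester, 1984; Brealey et al., 2011) can be sketched as follows; the figures are hypothetical and the helper name is ours:

```python
# Market approach to PVGO per share: under a no-growth policy the share
# price equals capitalized current earnings, so whatever remains of the
# observed price is attributed to growth options.
def pvgo_per_share(price, eps, cost_of_capital):
    """Implied present value of growth options per share."""
    no_growth_value = eps / cost_of_capital   # EPS treated as a perpetuity
    return price - no_growth_value

# Hypothetical firm: $50 share price, $3 EPS, 10% cost of equity
print(pvgo_per_share(50.0, 3.0, 0.10))  # 20.0, i.e. 40% of market value
```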

Data Description
Note. The dividend payment's unit of measurement is USD; PVGO's unit of measurement is thousand USD.
Among the 205 firms included in the sample, the average dividend payment is about $0.17, ranging from $0.0025 to as much as $1.2225. The mean and median are relatively small and close to the minimum figure, showing that US firms paid a very low level of dividends over 2008-2014. Additionally, the small standard deviation supports the dividend smoothing theory suggested by Kumar (1988): US corporations are unwilling to make significant changes in dividend payments. Since the mean is slightly greater than the median, the distribution is faintly skewed to the right. At the same time, PVGO averages just above $4 million per quarter. However, there is a huge gap between the maximum and minimum values, about $343.75 million.
It is reasonable to infer that growth opportunities differ greatly among firms. The comparison between the mean and the median carries some implications. Based on the median, there are two groups of firms: high-PVGO and low-PVGO. The mean being much higher than the median indicates the large gap between those groups.
The dividend announcement dates and the quarterly dividends for each company are then collected. The announcement date is the date when news of the forthcoming dividend first appears on NASDAQ's website.
Neither the ex-dividend date nor the date when dividends are paid is considered an announcement date. The sample selection is based on a number of assumptions proposed by dividend signaling theory in order to examine the relation between PVGO and firms' dividend policies. First of all, dividends constitute a costly signal to cope with asymmetric information, and investors should perceive them as having the strongest information content among signaling mechanisms. Therefore, if the signaling theory is valid, we would observe a positive relation between PVGO and firm dividend policy (Li & Zhao, 2008).
Following Bhattacharya's (1979) suggestion, the study also assumes that other simultaneous sources of information, such as accounting reports or earnings announcements, are not fundamentally reliable "screening" mechanisms because of the moral hazard involved in communicating profitability. As a result, regular quarterly dividends to common shareholders are supposed to carry the greatest possible information content, and hence have the most significant influence on firm market value. As in other studies, including Aharony and Swary (1980) and Asquith and Mullins (1983), it is essential to assume that the US market is semi-strong-form efficient, which means there is no leakage of information prior to the dividend announcement. In addition, the US market is commonly regarded as one of the most transparent and well-regulated environments, where investors have rich access to information. Therefore, if this paper can show that dividends still communicate valuable information to investors in such a market, it would be safe to draw the same conclusion for other, less developed markets where access to information is limited.

Estimation Procedure
The variable calculation process is described in detail as follows. The independent variable LN_DIV_it represents the change in the dividend payment of firm i in quarter t compared to the previous quarter. It equals the natural logarithm of the ratio of the current dividend payment DIV_it to the previous quarter's dividend payment DIV_i,t-1:

LN_DIV_it = ln(DIV_it / DIV_i,t-1)

The dividend payment for each firm is collected from either DataStream or NASDAQ's website. The figures are subsequently dated back to the quarter when the dividend announcement is made, to match the corresponding data. Similarly, the dependent variable LN_PVGO_it refers to the difference between two successive quarters in the PVGO of a firm. It is the natural logarithm of the ratio of the current PVGO to the previous quarter's figure:

LN_PVGO_it = ln(PVGO_it / PVGO_i,t-1)

PVGO itself is the difference between the firm's market value and the value of its assets in place:

PVGO_it = MV_it − AIP_it, where MV_it = P_it × N_it

with P_it the share price of firm i in quarter t and N_it the number of shares issued by firm i in quarter t. The value of a firm's assets in place is calculated as the present value of its current free cash flow (FCF_it), treated as a perpetuity and discounted at its cost of capital (r_it):

AIP_it = FCF_it / r_it

To estimate FCF_it, it is necessary to assume that replacement investment in current assets is equivalent to accounting depreciation. Thus, FCF_it is computed by subtracting income tax payments (T_it) from current earnings before interest and tax (EBIT_it):

FCF_it = EBIT_it − T_it

The appropriate r_it is estimated by the market model (CAPM) with an adjusted beta for each firm, as shown in the following formula:

r_it = rf_t + (rm_t − rf_t) × beta_i

where rf_t is the risk-free rate, using returns on 10-year US Treasury bonds; rm_t is the market return, using the S&P 500 index; and beta_i is the firm-specific beta.
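A minimal sketch of the estimation steps above, assuming the inputs (price, share count, EBIT, tax payments, beta, benchmark returns) have already been collected from DataStream or NASDAQ; all function names and sample figures are illustrative:

```python
import math

def capm_cost_of_capital(rf, rm, beta):
    # r_it = rf_t + (rm_t - rf_t) * beta_i
    return rf + (rm - rf) * beta

def pvgo(price, shares, ebit, tax, rf, rm, beta):
    mv = price * shares                   # MV_it = P_it * N_it
    fcf = ebit - tax                      # FCF_it = EBIT_it - T_it
    r = capm_cost_of_capital(rf, rm, beta)
    aip = fcf / r                         # AIP_it: FCF as a perpetuity
    return mv - aip                       # PVGO_it = MV_it - AIP_it

def log_change(curr, prev):
    # LN_DIV_it = ln(DIV_it / DIV_i,t-1); same form for LN_PVGO_it
    return math.log(curr / prev)

# Hypothetical firm-quarter: $10 price, 1000 shares, EBIT 500, tax 100,
# 2% risk-free rate, 8% market return, beta of 1
print(pvgo(10.0, 1000, 500.0, 100.0, 0.02, 0.08, 1.0))  # 5000.0
```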

Benefits from Using Panel Data
As discussed in the literature review, most previous empirical studies use event studies or OLS as the main method to test the impact of dividend policy on the market. However, this study needs to be conducted differently to find a more comprehensive solution to the dividend puzzle. According to Hsiao (2006), panel data provide more accurate inference of model parameters, as they usually contain more degrees of freedom and more sample variability than pure cross-sectional or time series data, leading to further improvement in the efficiency of econometric estimates. In addition, since panel data are associated with companies, there is bound to be heterogeneity across these units. Panel data estimation techniques can take such heterogeneity into account by allowing for entity-specific variables. At the same time, panel data mitigate the impact of omitted variables. It is arguable that the real reason one finds (or does not find) certain effects is the omission from one's model specification of particular variables that are correlated with the included explanatory variables. Panel data contain information on both the inter-temporal dynamics and the individuality of the entities, and so may allow one to control for the effects of missing or unobserved variables.
Moreover, panel data offers greater capacity for capturing the complexity of investors' behavior toward dividend policy (Baltagi, 1995). It generates more accurate predictions for individual outcomes by pooling the data rather than generating predictions for different outcomes using only the data of the individual in question. If individual behaviors are similar conditional on certain variables, panel data provides the possibility of learning about one individual's behavior by observing the behavior of others. Thus, it is possible to obtain a more accurate description of an individual's behavior by supplementing observations of the individual in question with data on others.
Correspondingly, panel data enriches empirical analysis in ways that may not be possible when using only cross-sectional or time series data. According to Gujarati (2008), there are four potential estimation techniques for panel data. Since the data used in this study form a short balanced panel (Note 2), it is appropriate to employ two of them: (1) the fixed effects least squares dummy variable model and (2) the random effects model. Clark and Linzer (2012) make a similar recommendation on model choice (Note 3).

Panel Unit Root Tests
Brooks (2014) strongly suggests that researchers run panel unit root tests before working with panel data. Two of the most common panel unit root tests are proposed by Levin, Lin, and Chu (2002) and by Im, Pesaran, and Shin (2003) (hereafter LLC and IPS respectively). While they are similar to the unit root tests conducted on a single series, many authors, including Baltagi (1995) and Ramirez (2006), demonstrate that panel unit root tests are more powerful: they reduce the probability of Type II error because the information in the time series is augmented by the cross-sectional data.
Both tests are based on the equation:

Δy_{i,t} = α_i + λ_t + δ_i·t + ρ_i·y_{i,t−1} + Σ_{j=1..p} θ_{i,j}·Δy_{i,t−j} + ε_{i,t}

The equation allows for both entity-specific and time-specific effects through α_i and λ_t respectively, as well as a separate deterministic trend in each series through δ_i·t, with the lag structure included to mop up autocorrelation in Δy_{i,t}.
Even though they use the same equation, the tests have different hypotheses. LLC assumes a common unit root process, whereas IPS assumes individual unit root processes. As a result, the null for the former test is H_0: ρ_i ≡ ρ = 0 for all i, and the alternative hypothesis is H_1: ρ < 0 for all i. In contrast, the hypotheses for IPS are H_0: ρ_i = 0 for all i, and H_1: ρ_i < 0 for i = 1, 2, …, N_1; ρ_i = 0 for i = N_1 + 1, N_1 + 2, …, N. Brooks (2014) also notes that even though IPS's heterogeneous panel unit root test is more applicable than the homogeneous one when N is small relative to T, its results may not be valid when N is large and T is small, in which case the LLC approach may be better. However, this study still reports the IPS test as a reference point for the primary result provided by the LLC test.
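The distinction between the two nulls can be illustrated with a toy version of the first-stage regressions. This is a hedged sketch, not the actual LLC or IPS statistics, which also involve bias corrections and studentization; the function names and simulated data are purely illustrative. LLC pools the panel to estimate one common ρ, while IPS estimates a separate ρ_i per unit:

```python
import numpy as np

def pooled_rho(panel):
    """LLC-style first stage: one common rho in dy_it = rho * y_{i,t-1} + e_it,
    estimated by pooling observations across all cross-sectional units."""
    y_lag = panel[:, :-1].ravel()
    dy = np.diff(panel, axis=1).ravel()
    return float(np.dot(y_lag, dy) / np.dot(y_lag, y_lag))

def unit_rhos(panel):
    """IPS-style first stage: a separate rho_i for each cross-sectional unit."""
    return np.array([np.dot(y[:-1], np.diff(y)) / np.dot(y[:-1], y[:-1])
                     for y in panel])

# Simulated stationary AR(1) panel: true rho = phi - 1 = -0.5 for every unit
rng = np.random.default_rng(0)
N, T, phi = 10, 200, 0.5
panel = np.zeros((N, T))
for t in range(1, T):
    panel[:, t] = phi * panel[:, t - 1] + rng.standard_normal(N)
```

In this simulation every unit is stationary, so both the pooled estimate and each individual estimate sit well below zero, which is what drives rejection of the respective unit root nulls.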

The Fixed Effect Least-Squares Dummy Variable (LSDV) Model
Consider the following fixed effects regression model (FEM):

LN_PVGO_{it} = β_{1i} + β_2·LN_DIV_{it} + u_{it}    (1)

The subscript i on the intercept term β_{1i} indicates that the intercepts of the 205 subjects may vary considerably because of the special features of each firm, such as industry, capital structure or corporate governance. Even though the intercept may differ across firms, it remains unchanged over time, that is, it is time-invariant. These are the so-called "fixed effects" of the model.
In the next step, the differential intercept dummy technique is introduced to allow for entity-fixed and time-fixed effects within the model. This is often termed a two-way error component model, since it contains both cross-sectional and time dummies. By including a dummy for each subject and each quarter, the model is able to capture the pure effect of the individual entity at a particular interval (by controlling for the unobserved heterogeneity).
To avoid falling into the dummy-variable trap (i.e., perfect multicollinearity between the dummy variables and the intercept), there are only 204 firm dummy variables (from D_2i to D_205i) for the total of 205 firms, with firm number one treated as the reference category. As a result, the intercept β_1 is the intercept value of firm number one, and each dummy coefficient α_i reflects the difference between the intercept value of the first company and that of company i. The sum (β_1 + α_2) gives the true value of firm number two's intercept. The figures for the other firms can be calculated similarly. The same intuition applies to the time dummy variables.
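The dummy-variable mechanics can be seen compactly by running the LSDV regression directly. The following is an illustrative sketch with made-up numbers for three firms, not the paper's 205-firm estimation; `lsdv` is a hypothetical helper:

```python
import numpy as np

def lsdv(y, x, firm_ids):
    """Fixed effects via least squares dummy variables.
    The first firm is the reference category, so its dummy is omitted."""
    firms = np.unique(firm_ids)
    dummies = np.column_stack([(firm_ids == f).astype(float) for f in firms[1:]])
    X = np.column_stack([np.ones(len(y)), dummies, x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    # coef = [beta_1 (reference intercept), differential intercepts alpha_i, slope]
    return coef

# Three firms, two periods each; true intercepts 1.0, 2.0, 3.0 and slope 0.5
firm_ids = np.array([1, 1, 2, 2, 3, 3])
x = np.array([0.0, 2.0, 1.0, 3.0, 0.5, 2.5])
intercepts = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
y = intercepts + 0.5 * x
coef = lsdv(y, x, firm_ids)
```

The regression recovers β_1 = 1, α_2 = 1, α_3 = 2 and the slope 0.5, so firm two's intercept is β_1 + α_2 = 2, exactly as described above.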

The Random Effects Model (REM)
According to Kmenta (1986), one of the most serious problems with FEM is that it cannot include relevant explanatory variables that are time-invariant. To overcome this issue, econometricians have come up with the idea of adding dummy variables to the model. Still, this is not a complete solution, since it causes an inevitable loss of degrees of freedom. The random effects model arguably offers a better resolution by transferring this ignorance to the disturbance term.
Recall (1). Instead of treating β_{1i} as fixed, REM assumes it is a random variable with mean β_1, so the intercept value for each firm can be expressed as:

β_{1i} = β_1 + ε_i, i = 1, 2, …, 205    (3)

Substituting β_{1i} from (3) into (1) yields a new equation:

LN_PVGO_{it} = β_1 + β_2·LN_DIV_{it} + ε_i + u_{it} = β_1 + β_2·LN_DIV_{it} + w_{it}    (4)

The composite error term w_{it} consists of two components, ε_i and u_{it}. While the former is the cross-section error component, the latter is the combined time series and cross-section error component, occasionally referred to as the idiosyncratic term. That is also the reason the model is sometimes called the error components model (ECM). The model makes the following general assumptions:

ε_i ~ N(0, σ_ε²); u_{it} ~ N(0, σ_u²); E(ε_i·u_{it}) = 0; E(ε_i·ε_j) = 0 (i ≠ j); E(u_{it}·u_{is}) = E(u_{it}·u_{jt}) = E(u_{it}·u_{js}) = 0 (i ≠ j; t ≠ s)

That is, the individual disturbance terms are not correlated with each other and are not autocorrelated in either the entity or the time series dimension. Moreover, w_{it} is assumed not to be correlated with any of the explanatory variables included in the model. Since w_{it} consists partly of ε_i, however, it may be correlated with the independent variables; in that case, REM will produce inconsistent estimates of the regression coefficients. The Hausman test, discussed later, will tell us in a given application whether w_{it} is correlated with the explanatory variables, that is, whether REM is the appropriate model.
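In practice the REM is estimated by GLS, which amounts to quasi-demeaning the data with a weight θ determined by the two variance components. The sketch below illustrates only that transform under the assumptions above; the variance components are taken as known here, whereas feasible GLS would estimate them, and the function names are hypothetical:

```python
import numpy as np

def re_theta(sigma_eps2, sigma_u2, T):
    """GLS weight: theta = 1 - sqrt(sigma_u^2 / (sigma_u^2 + T * sigma_eps^2)),
    where sigma_eps^2 is the variance of the cross-section component eps_i
    and sigma_u^2 the variance of the idiosyncratic term u_it."""
    return 1.0 - np.sqrt(sigma_u2 / (sigma_u2 + T * sigma_eps2))

def quasi_demean(panel, theta):
    # Subtract a fraction theta of each entity's time mean: y_it - theta * ybar_i
    return panel - theta * panel.mean(axis=1, keepdims=True)
```

The two limiting cases connect REM to the other estimators: when σ_ε² = 0 we get θ = 0, which is pooled OLS, and as σ_u²/σ_ε² shrinks, θ approaches 1, which is the within (fixed effects) transform.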

Fixed or Random Effects Models
A challenge that arises while building this model is: which model is more appropriate, FEM or REM? From the discussion above, it is apparent that each approach has its own potential pros and cons. The FEM provides unbiased estimates of β, but those estimates can be subject to high sample-to-sample variability. The REM introduces bias into the estimates of β, but can greatly reduce the variance of those estimates, leading to estimates that are closer, on average, to the true value in any particular sample. Unfortunately, it is challenging to decide which of the two models is more applicable in this study.
According to Gujarati (2008), the answer to this dilemma depends on the assumed correlation between the cross-section-specific error component ε_i and the independent variable LN_DIV. FEM may be superior when the two are correlated, whereas ECM may be preferable if there is no correlation.
On the other hand, Judge et al. (1980) observe that in the case of a short balanced panel, each method can provide significantly different results. While REM treats the intercept as β_{1i} = β_1 + ε_i, where ε_i is the individual random component, FEM views β_{1i} as fixed and not random. In the latter case, statistical inference is conditional on the observed cross-sectional subjects in the sample. This is appropriate if the cross-sectional subjects are believed not to be drawn randomly from a larger population; in that case, FEM is more appropriate. If the cross-sectional subjects are regarded as random drawings, however, then REM is more suitable, for in that case statistical inference is unconditional.
Given how the data in this study are selected, it is reasonable to state that FEM is more applicable. This paper performs two specification tests: the redundant fixed effects test and the Hausman test. The first test is intended to tell us whether a fixed effects panel regression approach is valid, or whether the data can simply be pooled and estimated using a standard OLS regression model. The Hausman specification test shows how significantly the parameter estimates differ between FEM and REM. Essentially, it tests for violation of the random effects modelling assumption that the independent variables are statistically independent of the unit effects. The estimate of β in the FEM (β̂_FE) should be the same as the estimate of β in the REM (β̂_RE) if no violation is detected.
According to Clark and Linzer (2015), the Hausman test statistic H is a measure of the difference between the two estimates:

H = (β̂_FE − β̂_RE)′ [Var(β̂_FE) − Var(β̂_RE)]^{−1} (β̂_FE − β̂_RE)

Under the null hypothesis, H is distributed chi-square with degrees of freedom equal to the number of independent variables in the model. If the probability value is less than 0.05 (p < 0.05), the two sets of estimates are sufficiently different and consequently the null hypothesis can be rejected; in that case, FEM is preferred to REM.
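The statistic can be written out in a few lines. The sketch below assumes the coefficient vectors and their covariance matrices have already been estimated (the inputs are invented for illustration and are not this study's estimates), and it ignores the practical complication that the variance difference may fail to be positive definite:

```python
import numpy as np

def hausman(b_fe, b_re, var_fe, var_re):
    """H = (b_FE - b_RE)' [Var(b_FE) - Var(b_RE)]^{-1} (b_FE - b_RE),
    chi-square with k = len(b_fe) degrees of freedom under the null."""
    d = np.asarray(b_fe) - np.asarray(b_re)
    v = np.asarray(var_fe) - np.asarray(var_re)
    return float(d @ np.linalg.inv(v) @ d)

# Single-regressor example with hypothetical slope estimates and variances
h = hausman([0.026], [0.0025], [[0.0006]], [[0.0001]])
```

Here H ≈ 1.10, which for a χ²(1) distribution is well below the 5% critical value of 3.84, so the null of no correlation between the unit effects and the regressor would not be rejected.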

LLC and IPS Tests
Both tests are computed using EViews. Two lags were chosen on the basis of the Schwarz Info Criterion (SIC) for the augmented Dickey-Fuller (ADF) test, including an intercept but no trend. Using a panel comprising all 205 companies, the results of the LLC and IPS tests for each variable in levels and first differences are presented in Table 2. Note. LLC = Levin, Lin, and Chu (2002); IPS = Im, Pesaran, and Shin (2003). The statistics are asymptotically distributed as standard normal with a left-hand-side rejection area. A * indicates rejection of the null hypothesis of non-stationarity at the 5 per cent level of significance.
As can easily be seen, LLC and IPS produce the same result even though they make quite different assumptions. Both reject the null of a unit root for all variables in level form and in difference form, as the test statistics are well below the critical values. The evidence strongly indicates that neither LN_PVGO nor LN_DIV evolves as a non-stationary process, so applying the panel data regression models, namely FEM and REM, will not result in biased and inconsistent estimates.
It should be noted that LLC requires only evidence against the non-stationary null in one series to reject the joint null hypothesis. As a result, Breitung and Pesaran (2008) state that the correct conclusion when the null is rejected is that "a significant proportion of the cross-sectional units are stationary". Especially in the context of a large number of subjects, as in this study, this may not carry much statistical meaning, since no information is provided on exactly how many series are stationary. Often, the homogeneity assumption is not economically meaningful either, since no theory suggests that all of the series have the same autoregressive dynamics and thus the same value of ρ.

FEM and REM Models
Table 3 presents the statistical results of (2) and (4). For the convenience of readers, I also show the result of the OLS method in this table as a reference point. While the estimates on LN_DIV are positive for all models, suggesting a positive relationship between the change in dividend payment and PVGO, they are not always statistically significant, in particular for the FEM with period fixed effects and with both cross-section and period fixed effects. However, only those specifications produce the best R-squared, of around 0.26. In contrast, the intercepts are positive and statistically significant in all cases; all three methods produce a very similar intercept value of around 0.0124 with significant t-values.
This empirical evidence contradicts dividend signaling theory. While it confirms a positive relationship between dividend payment and PVGO, it does not support the claim that dividend policy plays a significant role in resolving information asymmetry in the market. When LN_DIV increases or decreases by one unit, LN_PVGO changes by only 0.0025 to 0.026 units. Given that the average dividend payment among the sample companies is only about $0.17, the impact of dividends on PVGO looks even more insignificant. In addition, the extremely low R-squared suggests that neither FEM nor REM has any considerable explanatory power, implying that LN_DIV alone is not sufficient to explain the trend of PVGO. Interestingly, the results of OLS and REM are very much alike, indicating that the random effects do not make a significant difference to the model.
At first glance, the fixed effects coefficients seem fairly consistent in size and sign. In addition, all fixed effects differ from zero, even though those differences are not significant. However, it is still necessary to test whether there is unobserved heterogeneity. In order to determine whether the fixed effects panel regression approach is necessary, I run redundant fixed effects tests. The outcomes can be observed in Table 4. Note. "Cross-section" examines the redundancy of the cross-section effects against the full period/cross-section model. "Period" examines the redundancy of the period effects against the full period/cross-section model. "Cross-section/Period" examines the redundancy of both effects against the full period/cross-section model.
As can be seen, three different redundant fixed effects tests are performed: (1) constraining the cross-section fixed effects to zero; (2) constraining the period fixed effects to zero; and (3) constraining both types of fixed effects to zero. Each of them is produced in both Chi-square and F-test versions. It is clearly observable that the cross-sectional fixed effects model parameters are not significantly different from those of the pooled OLS; hence the period fixed effects are what make the distinction. The p-values associated with the Chi-square and F-statistics are 0.0000 for all three tests, providing strong evidence against the null hypothesis that the fixed effects are all equal to each other. This suggests that there is unobserved heterogeneity.
It also means that the data do not support the restrictions (constraints) and, in turn, that pooling the sample would not be appropriate.
Subsequently, we run a Hausman test where the null hypothesis is that the preferred model is REM, against the alternative of FEM. REM assumes that the random effects are uncorrelated with the explanatory variables; otherwise there would be an endogeneity problem, which in turn would make the estimators inconsistent. The Hausman test for correlated random effects tests this hypothesis. The test outcome is given in Table 5. The test fails to reject the null hypothesis at all conventional significance levels, providing evidence that the assumption that the random effects are uncorrelated with the explanatory variables holds for this dataset. Therefore it should not be problematic to estimate a random effects model. The p-value for the test is well above conventional significance levels, indicating that the fixed effects model is not required and that the random effects specification is to be preferred.
Even though the Hausman test does not indicate a significant difference (p = 0.3233), it does not completely guarantee that the random effects estimator is bias-free. As a result, it is not possible to conclude that REM is superior to FEM. In most cases, the real correlation between the covariates and the unit effects is not exactly zero. Therefore, the failure to reject the null hypothesis in the Hausman test may not be because the real correlation is zero (i.e., that the REM estimator is unbiased); it may instead be because the test does not have sufficient statistical power to reliably detect departures from the null. When using REM, there may still be bias in the estimates of β even if the Hausman test cannot reject the null hypothesis.

Conclusion
The primary purpose of this paper is to answer the question: Do investors still pay attention to dividends? In other words: does dividend payment policy have any impact on a firm's valuation, especially in the period after the last global financial crisis? There are a number of preceding theoretical and empirical studies on this "dividend puzzle". First and foremost, dividend signaling theory provides the main backdrop for the discussion. It suggests that managers, who often possess superior information compared to investors, use dividends to signal the intrinsic value of their firms to the market. Therefore, when a firm announces an increase in dividend payment, this has a positive influence on the public perception of the firm's future growth prospects and stability.
Many researchers have attempted to provide empirical evidence for this theory. Two main tests, the price-dividend and dividend-earnings type tests, are often performed; however, the results from both are highly conflicting. Fortunately, recent advances in corporate finance have introduced real options as a new investment valuation tool, which enables financial economists to produce a third type of test, the dividend-PVGO test. Using the real options technique, it is possible to estimate the present value of growth options from firms' market values. This value is believed to be a true reflection of the market's expectation of firms' prosperity. Therefore, if dividend signaling theory holds true, the dividend-PVGO test should reveal a positive correlation between dividend payments and changes in PVGO.
In order to conduct the test, data are retrieved for 205 US companies. These are frequent dividend payers that distributed profits to their shareholders in all 27 quarters of the 2008-2014 sample period. The final sample consists of 5,535 firm-quarter observations. Based on the nature of the sample, the fixed effects and random effects models are selected to examine the relationship between dividend payment and PVGO. The results are somewhat contrary to dividend signaling theory: there is a positive but extremely weak correlation between the current change in dividends and PVGO. This implies that the amount of information carried by dividends is modest and does not take investors by surprise. This finding may alleviate the concern about dividend smoothing: since dividend payout does not have a significant effect on the market, managers can be more flexible when operating dividend policy.
This paper is the very first to specifically examine the testable implications of the signalling models in the context of the relation between dividends and PVGO using the real options technique. It therefore fills a research gap, and other academics may use the results in this paper as a benchmark case. The outcome and evaluation have revealed some additional insights for future research. The US market is believed to be a well-developed and highly regulated environment that provides strong protection and rich access to information for investors; other markets may be characterized differently, so it would be interesting to conduct similar research on different markets. Furthermore, the dependent and independent variables in this study are LN_PVGO and LN_DIV respectively; a suggestion for future research is to replace them with different variables. For example, some studies use dividend per share instead of the change in dividend payment when examining the influence of dividend payout policy.

Table 1 .
This study analyzes a sample of 205 US companies that were frequent quarterly dividend payers during the period from Q1/2008 to Q3/2014. The purpose is to observe whether there are any changes in investors' behavior toward listed firms' dividend payout policy after the latest global financial crisis. To be included in the sample, firms must satisfy all of the following criteria: (1) being listed on the Nasdaq Composite; (2) paying dividends in all 27 quarters during the sample period; and (3) having all the data required for the variable calculations available on either DataStream or Bloomberg. The final sample is a panel comprising 5,535 (205 firms × 27 quarters) firm-quarter observations. Table 1 presents simple summary statistics for the dividend payments and PVGO of the sample firms. Descriptive statistics of the sample of 205 firms

Table 2 .
Panel unit root tests

Table 3 .
Results of FEM and REM. Note. FEM = Fixed effects model; REM = Random effects model. t-ratios in parentheses. Intercept and dummy variable parameter estimates are not shown.

Table 4 .
Redundant fixed effects tests (cross-section and period fixed effects)

Table 5 .
Correlated random effects-Hausman Test