Wooldridge, Introductory Econometrics (English edition): chapter-by-chapter teaching notes (2022)
CHAPTER 1 TEACHING NOTES

You have substantial latitude about what to emphasize in Chapter 1. I find it useful to talk about the economics of crime example (Example 1.1) and the wage example (Example 1.2) so that students see, at the outset, that econometrics is linked to economic reasoning, even if the economics is not complicated theory. I like to familiarize students with the important data structures that empirical economists use, focusing primarily on cross-sectional and time series data sets, as these are what I cover in a first-semester course. It is probably a good idea to mention the growing importance of data sets that have both a cross-sectional and time dimension. I spend almost an entire lecture talking about the problems inherent in drawing causal inferences in the social sciences. I do this mostly through the agricultural yield, return to education, and crime examples. These examples also contrast experimental and nonexperimental (observational) data. Students studying business and finance tend to find the term structure of interest rates example more relevant, although the issue there is testing the implication of a simple theory, as opposed to inferring causality. I have found that spending time talking about these examples, in place of a formal review of probability and statistics, is more successful (and more enjoyable for the students and me).

CHAPTER 2 TEACHING NOTES

This is the chapter where I expect students to follow most, if not all, of the algebraic derivations. In class I like to derive at least the unbiasedness of the OLS slope coefficient, and usually I derive the variance. At a minimum, I talk about the factors affecting the variance. To simplify the notation, after I emphasize the assumptions in the population model, and assume random sampling, I just condition on the values of the explanatory variables in the sample. Technically, this is justified by random sampling because, for example, E(ui|x1, x2, …, xn) = E(ui|xi) by independent sampling. I find that students are able to focus on the key assumption SLR.4 and subsequently take my word about how conditioning on the independent variables in the sample is harmless. (If you prefer, the appendix to Chapter 3 does the conditioning argument carefully.) Because statistical inference is no more difficult in multiple regression than in simple regression, I postpone inference until Chapter 4. (This reduces redundancy and allows you to focus on the interpretive differences between simple and multiple regression.) You might notice how, compared with most other texts, I use relatively few assumptions to derive the unbiasedness of the OLS slope estimator, followed by the formula for its variance. This is because I do not introduce redundant or unnecessary assumptions. For example, once SLR.4 is assumed, nothing further about the relationship between u and x is needed to obtain the unbiasedness of OLS under random sampling.
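For reference, a compact version of the unbiasedness argument for the simple regression model y = β0 + β1 x + u, conditioning on the sample values of x (standard algebra, added here in display form rather than quoted from the text):

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})\,y_i}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \beta_1 + \frac{\sum_{i=1}^{n}(x_i - \bar{x})\,u_i}{\sum_{i=1}^{n}(x_i - \bar{x})^2},$$

$$E(\hat{\beta}_1 \mid x_1,\dots,x_n) = \beta_1 + \frac{\sum_{i=1}^{n}(x_i - \bar{x})\,E(u_i \mid x_1,\dots,x_n)}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \beta_1,$$

because random sampling and SLR.4 give E(ui|x1, …, xn) = E(ui|xi) = 0.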

CHAPTER 3 TEACHING NOTES

For undergraduates, I do not work through most of the derivations in this chapter, at least not in detail. Rather, I focus on interpreting the assumptions, which mostly concern the population. Other than random sampling, the only assumption that involves more than population considerations is the assumption about no perfect collinearity, where the possibility of perfect collinearity in the sample (even if it does not occur in the population) should be touched on. The more important issue is perfect collinearity in the population, but this is fairly easy to dispense with via examples. These come from my experiences with the kinds of model specification issues that beginners have trouble with. The comparison of simple and multiple regression estimates based on the particular sample at hand, as opposed to their statistical properties, usually makes a strong impression. Sometimes I do not bother with the “partialling out” interpretation of multiple regression. As far as statistical properties, notice how I treat the problem of including an irrelevant variable: no separate derivation is needed, as the result follows from Theorem 3.1. I do like to derive the omitted variable bias in the simple case. This is not much more difficult than showing unbiasedness of OLS in the simple regression case under the first four Gauss-Markov assumptions. It is important to get the students thinking about this problem early on, and before too many additional (unnecessary) assumptions have been introduced.
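For reference, the result in question can be stated compactly (generic notation, added here rather than quoted from the text). If the true model is y = β0 + β1 x1 + β2 x2 + u with E(u|x1, x2) = 0, but y is regressed on x1 alone, then, conditional on the sample,

$$E(\tilde{\beta}_1) = \beta_1 + \beta_2\,\tilde{\delta}_1,$$

where $\tilde{\beta}_1$ is the simple regression slope and $\tilde{\delta}_1$ is the slope from regressing x2 on x1. The bias term is zero only if β2 = 0 (x2 does not belong in the model) or if x1 and x2 are uncorrelated in the sample.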

I have intentionally kept the discussion of multicollinearity to a minimum. This partly indicates my bias, but it also reflects reality. It is, of course, very important for students to understand the potential consequences of having highly correlated independent variables. But this is often beyond our control, except that we can ask less of our multiple regression analysis. If two or more explanatory variables are highly correlated in the sample, we should not expect to precisely estimate their ceteris paribus effects in the population. I find extensive treatments of multicollinearity, where one “tests” or somehow “solves” the multicollinearity problem, to be misleading, at best. Even the organization of some texts gives the impression that imperfect multicollinearity is somehow a violation of the Gauss-Markov assumptions: they include multicollinearity in a chapter or part of the book devoted to “violation of the basic assumptions,” or something like that. I have noticed that masters students who have had some undergraduate econometrics are often confused on the multicollinearity issue. It is very important that students not confuse multicollinearity among the included explanatory variables in a regression model with the bias caused by omitting an important variable. I do not prove the Gauss-Markov theorem. Instead, I emphasize its implications. Sometimes, and certainly for advanced beginners, I put a special case of Problem 3.12 on a midterm exam, where I make a particular choice for the function g(x). Rather than have the students directly compare the variances, they should appeal to the Gauss-Markov theorem for the superiority of OLS over any other linear, unbiased estimator.

CHAPTER 4 TEACHING NOTES

At the start of this chapter is a good time to remind students that a specific error distribution played no role in the results of Chapter 3. That is because only the first two moments were derived under the full set of Gauss-Markov assumptions. Nevertheless, normality is needed to obtain exact normal sampling distributions (conditional on the explanatory variables). I emphasize that the full set of CLM assumptions is used in this chapter, but that in Chapter 5 we relax the normality assumption and still perform approximately valid inference. One could argue that the classical linear model results could be skipped entirely, and that only large-sample analysis is needed. But, from a practical perspective, students still need to know where the t distribution comes from because virtually all regression packages report t statistics and obtain p-values off of the t distribution. I then find it very easy to cover Chapter 5 quickly, by just saying we can drop normality and still use t statistics and the associated p-values as being approximately valid. Besides, occasionally students will have to analyze smaller data sets, especially if they do their own small surveys for a term project. It is crucial to emphasize that we test hypotheses about unknown population parameters. I tell my students that they will be punished if they write something like H0: β̂1 = 0 on an exam or, even worse, H0: .632 = 0. One useful feature of Chapter 4 is its illustration of how to rewrite a population model so that it contains the parameter of interest in testing a single restriction. I find this is easier, both theoretically and practically, than computing variances that can, in some cases, depend on numerous covariance terms. The example of testing equality of the return to two- and four-year colleges illustrates the basic method, and shows that the respecified model can have a useful interpretation.
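The respecification used in that kind of example can be sketched in generic notation (added summary, not a quotation). To test H0: β1 = β2 in y = β0 + β1 x1 + β2 x2 + β3 x3 + u, define θ = β1 − β2 and substitute β1 = θ + β2, which gives

$$y = \beta_0 + \theta\,x_1 + \beta_2\,(x_1 + x_2) + \beta_3\,x_3 + u.$$

Regressing y on x1, (x1 + x2), and x3 reports the estimate of θ and its standard error directly, so the t statistic and a confidence interval for β1 − β2 come straight off the usual output.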

Of course, some statistical packages now provide a standard error for linear combinations of estimates with a simple command, and that should be taught, too. One can use an F test for single linear restrictions on multiple parameters, but this is less transparent than a t test and does not immediately produce the standard error needed for a confidence interval or for testing a one-sided alternative. The trick of rewriting the population model is useful in several instances, including obtaining confidence intervals for predictions in Chapter 6, as well as for obtaining confidence intervals for marginal effects in models with interactions (also in Chapter 6). The major league baseball player salary example illustrates the difference between individual and joint significance when explanatory variables (rbisyr and hrunsyr in this case) are highly correlated. I tend to emphasize the R-squared form of the F statistic because, in practice, it is applicable a large percentage of the time, and it is much more readily computed. I do regret that this example is biased toward students in countries where baseball is played. Still, it is one of the better examples of multicollinearity that I have come across, and students of all backgrounds seem to get the point.
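For reference, the R-squared form of the F statistic for testing q exclusion restrictions (standard notation, added here for convenience):

$$F = \frac{(R^2_{ur} - R^2_{r})/q}{(1 - R^2_{ur})/(n - k - 1)},$$

where $R^2_{ur}$ and $R^2_{r}$ are the R-squareds from the unrestricted and restricted models and n − k − 1 is the unrestricted degrees of freedom. It requires the same dependent variable in both models, which is why it applies to exclusion restrictions such as the joint significance of rbisyr and hrunsyr.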

CHAPTER 5 TEACHING NOTES

Chapter 5 is short, but it is conceptually more difficult than the earlier chapters, primarily because it requires some knowledge of asymptotic properties of estimators. In class, I give a brief, heuristic description of consistency and asymptotic normality before stating the consistency and asymptotic normality of OLS. (Conveniently, the same assumptions that work for finite sample analysis work for asymptotic analysis.) More advanced students can follow the proof of consistency of the slope coefficient in the bivariate regression case. Section E.4 contains a full matrix treatment of asymptotic analysis appropriate for a masters level course. An explicit illustration of what happens to standard errors as the sample size grows emphasizes the importance of having a larger sample. I do not usually cover the LM statistic in a first-semester course, and I only briefly mention the asymptotic efficiency result. Without full use of matrix algebra combined with limit theorems for vectors and matrices, it is very difficult to prove asymptotic efficiency of OLS. I think the conclusions of this chapter are important for students to know, even though they may not fully grasp the details. On exams I usually include true-false type questions, with explanation, to test the students' understanding of asymptotics. For example: “In large samples we do not have to worry about omitted variable bias.” (False). Or “Even if the error term is not normally distributed, in large samples we can still compute approximately valid confidence intervals under the Gauss-Markov assumptions.” (True).
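One way to produce such a standard-error illustration is a small simulation along the following lines (an added sketch with a made-up model y = 1 + 0.5x + u, deliberately non-normal errors, and arbitrary sample sizes, estimated with statsmodels); the slope's standard error shrinks roughly at rate 1/sqrt(n).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
for n in [100, 400, 1600, 6400]:
    x = rng.normal(size=n)
    u = rng.standard_exponential(size=n) - 1.0   # non-normal, zero-mean errors
    y = 1.0 + 0.5 * x + u
    res = sm.OLS(y, sm.add_constant(x)).fit()
    print(n, round(res.bse[1], 4))               # standard error of the slope estimate
```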

CHAPTER 6 TEACHING NOTES

I cover most of Chapter 6, but not all of the material in great detail. I use the example in Table 6.1 to quickly run through the effects of data scaling on the important OLS statistics. (Students should already have a feel for the effects of data scaling on the coefficients, fitted values, and R-squared because it is covered in Chapter 2.) At most, I briefly mention beta coefficients; if students have a need for them, they can read this subsection. The functional form material is important, and I spend some time on more complicated models involving logarithms, quadratics, and interactions. An important point for models with quadratics, and especially interactions, is that we need to evaluate the partial effect at interesting values of the explanatory variables. Often, zero is not an interesting value for an explanatory variable and is well outside the range in the sample. Using the methods from Chapter 4, it is easy to obtain confidence intervals for the effects at interesting x values.
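To make that concrete, a generic quadratic example (added illustration, not a model from the text): in y = β0 + β1 x + β2 x² + u, the partial effect of x at a chosen value x0 is β1 + 2β2 x0. Rewriting the model as

$$y = \alpha_0 + \theta\,x + \beta_2\,(x - x_0)^2 + u, \qquad \theta = \beta_1 + 2\beta_2 x_0,$$

and regressing y on x and (x − x0)² makes θ the coefficient on x, so its standard error and confidence interval appear directly in the output.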

As far as goodness-of-fit, I only introduce the adjusted R-squared, as I think using a slew of goodness-of-fit measures to choose a model can be confusing to novices (and does not reflect empirical practice). It is important to discuss how, if we fixate on a high R-squared, we may wind up with a model that has no interesting ceteris paribus interpretation. I often have students and colleagues ask if there is a simple way to predict y when log(y) has been used as the dependent variable, and to obtain a goodness-of-fit measure for the log(y) model that can be compared with the usual R-squared obtained when y is the dependent variable. The methods described in Section 6.4 are easy to implement and, unlike other approaches, do not require normality.
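One version of that procedure can be sketched in a few lines of Python with statsmodels. Everything below is an added, made-up illustration (simulated data and invented names): fit the log(y) model, rescale the exponentiated fitted values by a regression through the origin, and compare the squared correlation between y and the resulting predictions with the usual R-squared from a levels regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.exp(0.2 + 0.3 * x + rng.normal(scale=0.5, size=n))   # made-up positive outcome
X = sm.add_constant(x)

log_fit = sm.OLS(np.log(y), X).fit()
m_hat = np.exp(log_fit.fittedvalues)                  # exp of the fitted values of log(y)
alpha_hat = sm.OLS(y, m_hat).fit().params[0]          # scale factor: regress y on m_hat through the origin
y_hat = alpha_hat * m_hat                             # predictions of y implied by the log(y) model
r2_log_model = np.corrcoef(y, y_hat)[0, 1] ** 2       # comparable to the R-squared from a levels model
r2_levels = sm.OLS(y, X).fit().rsquared
print(round(r2_log_model, 3), round(r2_levels, 3))
```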

The section on prediction and residual analysis contains several important topics, including constructing prediction intervals. It is useful to see how much wider the prediction intervals are than the confidence interval for the conditional mean. I usually discuss some of the residual-analysis examples, as they have real-world applicability.

CHAPTER 7 TEACHING NOTES

This is a fairly standard chapter on using qualitative information in regression analysis, although I try to emphasize examples with policy relevance (and only cross-sectional applications are included). In allowing for different slopes, it is important, as in Chapter 6, to appropriately interpret the parameters and to decide whether they are of direct interest. For example, in the wage equation where the return to education is allowed to depend on gender, the coefficient on the female dummy variable is the wage differential between women and men at zero years of education. It is not surprising that we cannot estimate this very well, nor should we want to. In this particular example we would drop the interaction term because it is insignificant, but the issue of interpreting the parameters can arise in models where the interaction term is significant. In discussing the Chow test, I think it is important to discuss testing for differences in slope coefficients after allowing for an intercept difference. In many applications, a significant Chow statistic simply indicates intercept differences. (See the example in Section 7.4 on student-athlete GPAs in the text.) From a practical perspective, it is important to know whether the partial effects differ across groups or whether a constant differential is sufficient.
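A rough sketch of how that test might be run in Python with statsmodels (added illustration with simulated data and invented names x1, x2, and group dummy d, not an example from the text): fit a model with an intercept shift only, fit a model that also interacts d with every slope, and do a joint F test on the interactions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({"x1": rng.normal(size=n),
                   "x2": rng.normal(size=n),
                   "d": rng.integers(0, 2, size=n)})
df["y"] = 1 + 0.5 * df.x1 - 0.3 * df.x2 + 0.4 * df.d + rng.normal(size=n)
df["d_x1"] = df.d * df.x1
df["d_x2"] = df.d * df.x2

restricted = smf.ols("y ~ d + x1 + x2", data=df).fit()                   # intercept difference only
unrestricted = smf.ols("y ~ d + x1 + x2 + d_x1 + d_x2", data=df).fit()   # slopes also differ by group
F, pval, df_diff = unrestricted.compare_f_test(restricted)               # joint test of the slope interactions
print(F, pval, df_diff)
```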

I admit that an unconventional feature of this chapter is its introduction of the linear probability model. I cover the LPM here for several reasons. First, the LPM is being used more and more because it is easier to interpret than probit or logit models. Plus, once the proper parameter scalings are done for probit and logit, the estimated effects are often similar to the LPM partial effects near the mean or median values of the explanatory variables. The theoretical drawbacks of the LPM are often of secondary importance in practice. Computer Exercise C7.9 is a good one to illustrate that, even with over 9,000 observations, the LPM can deliver fitted values strictly between zero and one for all observations. If the LPM is not covered, many students will never know about using econometrics to explain qualitative outcomes. This would be especially unfortunate for students who might need to read an article where an LPM is used, or who might want to estimate an LPM for a term paper or senior thesis. Once they are introduced to the purpose and interpretation of the LPM, along with its shortcomings, they can tackle nonlinear models on their own or in a subsequent course. A useful modification of the LPM estimated in equation (7.29) is to drop kidsge6 (because it is not significant) and then define two dummy variables, one for kidslt6 equal to one and the other for kidslt6 at least two. These can be included in place of kidslt6 (with no young children being the base group). This allows a diminishing marginal effect in an LPM. I was a bit surprised when a diminishing effect did not materialize.
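For a quick in-class demonstration, an LPM can be estimated as an OLS regression with heteroskedasticity-robust standard errors; the sketch below is an added example with simulated data and invented variable names (educ, kidslt6, inlf), so it stands in for, rather than reproduces, equation (7.29).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"educ": rng.normal(12, 2, size=n),
                   "kidslt6": rng.integers(0, 3, size=n)})
p = np.clip(0.2 + 0.04 * df["educ"].to_numpy() - 0.2 * df["kidslt6"].to_numpy(), 0.01, 0.99)
df["inlf"] = rng.binomial(1, p)                                        # binary outcome

lpm = smf.ols("inlf ~ educ + kidslt6", data=df).fit(cov_type="HC1")    # robust SEs for the LPM
print(lpm.params)
fitted = lpm.fittedvalues
print("share of fitted values outside [0, 1]:", float(np.mean((fitted < 0) | (fitted > 1))))
```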

CHAPTER 8 TEACHING NOTES

This is a good place to remind students that homoskedasticity played no role in showing that OLS is unbiased for the parameters in the regression equation. In addition, you probably should mention that there is nothing wrong with the R-squared or adjusted R-squared as goodness-of-fit measures. The key is that these are estimates of the population R-squared, 1 − Var(u)/Var(y), where the variances are the unconditional variances in the population. The usual R-squared, and the adjusted version, consistently estimate the population R-squared whether or not Var(u|x), which equals Var(y|x), depends on x. Of course, heteroskedasticity causes the usual standard errors, t statistics, and F statistics to be invalid, even in large samples, with or without normality. By explicitly stating the homoskedasticity assumption as conditional on the explanatory variables that appear in the conditional mean, it is clear that only heteroskedasticity that depends on the explanatory variables in the model affects the validity of standard errors and test statistics. The version of the Breusch-Pagan test in the text, and the White test, are ideally suited for detecting forms of heteroskedasticity that invalidate inference obtained under homoskedasticity.
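If students want to see these tests run, statsmodels ships implementations; the snippet below is an added illustration on simulated heteroskedastic data, and its het_breuschpagan call, which regresses the squared residuals on the regressors, is in the spirit of the version discussed in the text.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
u = rng.normal(size=n) * (1.0 + 0.5 * np.abs(x))   # error variance depends on x
y = 1.0 + 0.5 * x + u
X = sm.add_constant(x)

res = sm.OLS(y, X).fit()
bp_lm, bp_lm_p, bp_f, bp_f_p = het_breuschpagan(res.resid, X)  # squared residuals on the regressors
w_lm, w_lm_p, w_f, w_f_p = het_white(res.resid, X)             # adds squares and cross products
print("Breusch-Pagan p-value:", bp_lm_p, "White p-value:", w_lm_p)
```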

If heteroskedasticity depends on an exogenous variable that does not also appear in the mean equation, this can be exploited in weighted least squares for efficiency, but only rarely is such a variable available. One case where such a variable is available is when an individual-level equation has been aggregated. I discuss this case in the text but I rarely have time to teach it.
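A minimal sketch of the aggregated-data case (added, simulated example): if the individual-level equation satisfies homoskedasticity and only group averages are observed, the averaged error for group g has variance proportional to 1/m_g, so WLS with weights equal to the group sizes m_g is the natural estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
G = 200
m = rng.integers(5, 50, size=G)                  # group sizes
xbar = rng.normal(size=G)                        # group-average regressor
ubar = rng.normal(size=G) / np.sqrt(m)           # averaged error: variance proportional to 1/m_g
ybar = 1.0 + 0.5 * xbar + ubar

X = sm.add_constant(xbar)
wls = sm.WLS(ybar, X, weights=m).fit()           # weights proportional to the inverse error variance
ols = sm.OLS(ybar, X).fit()
print(wls.bse, ols.bse)                          # WLS standard errors are typically smaller
```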

As I mention in the text, other traditional tests for heteroskedasticity, such as the Park and Glejser tests, do not directly test what we want, or add too many assumptions under the null. The Goldfeld-Quandt test only works when there is a natural way to
