1  Stylized Facts on Long-run Growth and Business Cycles

(either A) Collect data for two countries of your choice and check the key stylized facts about long-run growth (the Kaldor facts) and business cycles.

(or B) Collect data for ten countries of your choice and check the stylized facts about the business cycles only with regard to the trade balance (by definition, exports minus imports). To construct it, you need to find data on real exports and real imports, as well as real GDP, to compare against. If data in real terms are not available, you may use the nominal value of exports and imports, converting them into real terms via dividing by the GDP price deflator (by definition, this can be obtained by dividing nominal GDP by real GDP).
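If you need to construct these real series yourself, a minimal EViews sketch of the deflation step (assuming hypothetical series names ngdp, rgdp, nexp and nimp for nominal GDP, real GDP, nominal exports and nominal imports) might look as follows:

  ' GDP deflator, by definition nominal over real GDP (x100 to express it as an index)
  series defl = 100*ngdp/rgdp
  ' deflate nominal exports and imports into real terms
  series rexp = 100*nexp/defl
  series rimp = 100*nimp/defl
  ' real trade balance, and its ratio to real GDP for comparability across countries
  series rtb = rexp - rimp
  series tby = rtb/rgdp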

(for both A and B) Report your findings in appropriate graphs and tables, comparing your results with those in the literature and across the two countries of your choice. Interpret your results by writing up a short research paper, including sections such as introduction, underlying theory, method of analysis or estimation, and your conclusions in the light of those from similar work you have identified via bibliographic search.

References for getting started (but find, read and cite at least 5-10 others):

(a) King, Robert G. and Sergio T. Rebelo (2000), "Resuscitating Real Business Cycles", NBER Working Paper No. 7534 (February); published as "Real Business Cycles and the Test of the Adelmans", Journal of Monetary Economics 33 (2), 1994, 405-438; also in J. B. Taylor and M. Woodford (eds.), Handbook of Macroeconomics, 1999, edition 1, volume 1, chapter 14, pp. 927-1007, Elsevier: pdf on BB.

(b) Stock, James H. and Mark W. Watson (1998), "Business Cycle Fluctuations in U.S. Macroeconomic Time Series", NBER Working Paper No. 6528 (April); published as "Evidence on Structural Instability in Macroeconomic Time Series Relations", Journal of Business and Economic Statistics 14 (1, January), 1996, 11-30; also in J. B. Taylor and M. Woodford (eds.), Handbook of Macroeconomics, 1999, edition 1, volume 1, chapter 1, pp. 3-64, Elsevier: pdf on BB.

2  VAR Analysis of Money, Output and Inflation

Collect data for a country of your choice and estimate a vector autoregression (VAR) in (stationarized) real money balances, real output (real GDP) and inflation (measured by the GDP deflator or the CPI). Perform the necessary data transformations, unit root, Granger causality and lag length tests, and choose a suitable Choleski ordering for the three variables. Then perform impulse response function (IRF) and forecast error variance decomposition (FEVD) analysis. Interpret your results and compare them to analogous results for the EA and US data we employed during our class session. Report your findings in an informative selection of graphs and tables. Write up a short research paper, including sections such as introduction, underlying theory, method of analysis or estimation, and your conclusions in the light of those from similar work you have identified via bibliographic search.
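As a rough sketch of this sequence of steps in an EViews program (with hypothetical stationarised series dlm, dly and dlp for real money growth, output growth and inflation; do check each command and its options against the EViews Help, as they may differ across versions):

  ' unit root tests on each transformed series (ADF, null of a unit root; KPSS, null of stationarity)
  dlm.uroot(adf, const)
  dlm.uroot(kpss, const)
  ' pairwise Granger causality tests, here with 4 lags, to motivate the Choleski ordering
  group gvar dlm dly dlp
  gvar.cause(4)
  ' estimate the VAR and inspect the lag length criteria
  var v1.ls 1 4 dlm dly dlp
  v1.laglen(8)
  ' impulse responses (Choleski factorisation) and variance decomposition over 20 quarters
  v1.impulse(20, imp=chol)
  v1.decomp(20)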

References for getting started (but find, read and cite at least 5-10 others):

(a) Stock, James H. and Mark W. Watson (2001), "Vector Autoregressions", Journal of Economic Perspectives 15 (4, Autumn), 101-115: pdf on BB.

(b) Christiano, Lawrence J., Mathias Trabandt, and Karl Walentin (2011), "DSGE Models for Monetary Policy Analysis", Handbook of Monetary Economics, Vol. 3A, chapter 7, pp. 285-367, Elsevier: pdf on BB.

3  Further Guidance on Some Technical Details (including EViews)

In this final section, I have structured, in the user-friendly format of Q(uestions) and A(nswers), some further guidance with regard to the empirical project (in both of its options above) that also provides technical details on EViews, in addition to what is already included in the data and code files in the zip archive we use and cover during our lab-classes.

Q1. Can we use multiple sources for our data for each country, or will this impact our analysis?

A1. Yes, you can. Of course, there might be some difference in results due to different data series/sources. But that should be fine in case you cannot find a common data source (such as the IMF; for an example, see next).

Q2. How do we find data relating to the capital stock?

A2. This is usually a hard series to find. One good/harmonised data source I am aware of is available via the following IMF link: https://www.imf.org/external/np/fad/publicinvestment/data/data.xlsx

Q3. How to compute 1st-order autocorrelation of any time series (in your data sets) as a measure of persistence in the BC facts?

A3. The easiest way in EViews seems to be the following: click on a series in your output *.wf1 file, then select View and then Correlogram, and check the magnitude (and p-value) of the 1st-order autocorrelation. The latter is usually high in levels or log-levels for variables such as real GDP, but not that high for others; persistence for the same variable taken in first(-log) differences usually decreases (considerably), so do not be surprised if you observe that in your data.
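In a program, the same view can be produced with the correl command; a one-line sketch (lrgdp being a hypothetical log real GDP series):

  ' display the correlogram (ACF and PACF) of lrgdp up to 12 lags; read off lag 1
  lrgdp.correl(12)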

Q4. Are the provided starting references compulsory to read?

A4. The starting references I have suggested and posted in pdf on Blackboard are very informative and helpful. You do not have to read these from beginning to end, but rather focus on the 5-10 relevant methodological pages (and related tables and graphs) you need in order to report something similar in your work!

Q5. How do we submit the coursework?

A5. When submitting your work via Turnitin, please use the Turnitin Assignments folder in Course Tools (not the Turnitin Assignments by Groups folder) and name your report (in pdf) as CW1_StNo1_StNo2_StNo3.pdf (replacing StNo1 by the true student number in digits, etc., starting with the lowest value and ordering the following ones in increasing order). You can also name in an identical way the corresponding zip archive, containing your group data set, program and output EViews files, that you may optionally (not by obligation) e-mail to me (at my Uni Reading e-mail address) after you submit the pdf via Blackboard/Turnitin. This proposed arrangement is meant to make the process of group submission more uniform and manageable, and thereby to facilitate my feedback and marking.

Q6. Do we address all Kaldor and business cycle facts?

A6. No, you may focus on a subset of the Kaldor and BC facts, but say why you do so in the report, e.g., something like "for the limited scope of the present study and due to problems with data availability, we opt to focus hereafter on the following subset of facts…"

Q7. How do we import data into EViews (from Excel)?

A7. There are alternative ways, and you can read details using the Help of EViews (e.g., by typing keywords like "importing data"). What works fast and easy is the following: copy an area of columns and rows in Excel (including the top row with the variable names but not the first column with the dates); then create an EViews workfile with annual or quarterly frequency, indicating the start and end date; then click on Quick in the top menu, select Empty Group (Edit Series), and paste in the top row, just above the first date entry, from the top-left corner cell. This creates a copy of your Excel data set, and you can delete the open window with this copy, as the series are now also contained in the EViews file; just save this newly-created EViews file. Then you can use this input file with the data set in EViews, running the various commands, as we did in the lab-classes. Do use my code, even by copy/pasting it: it is meant for you to use, replicate, modify, adapt and learn from! Just make sure you change it appropriately (even if minimally) so that it uses as input your data and not mine, e.g., by replacing the names of the variables in my code (for the US or EA, in my case) with a suitable country descriptor (e.g., UK) for the country of your choice (EViews can quickly do that by Find/Replace, but control the process at each replace so that you are sure it is correct!…).
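If you prefer to script the workfile-creation step rather than clicking through the menus, a one-line sketch (for a hypothetical quarterly sample 1995Q1-2020Q4 and a hypothetical workfile name mydata) is:

  ' create a new quarterly workfile covering the chosen sample, then paste or import the data into it
  wfcreate(wf=mydata) q 1995q1 2020q4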

Q8. What is the expected structure, length and content of the report?

A8. You should know that from previous courseworks. The structure should be something like: introduction (why the theme is interesting and how your work relates to it), data set description, what you do and why, what you find and how similar or different it is compared to earlier studies or to our lectures and classes. You should include an alphabetical list of uniformly formatted references (5-10-15) at the end, ordered by the family name of the first author, as a separate section, and you can cite from these full references in the report only as "Author(s) (year of publication)", e.g., Jones and Wilkins (2003) or Jones et al. (2003) if there are 3 co-authors or more. Your submitted paper/pdf should be about 5-7 pages of main text with 3-6 figures and/or tables that are most central or important to what you want to say and interpret; then you may include less important additional figures and/or tables in an appendix, but this is not necessary if you send me anyway the zip archive by e-mail after submitting the pdf report via Turnitin, which is desirable but optional.

Q9. What about seasonal adjustment?

A9. Seasonal adjustment of quarterly and monthly data (not annual data) is normally done at the source (of information: e.g., IMF, ONS, OECD, Bank of England). You may do it as well in EViews, e.g., via the Census X-12 procedure (see EViews Help), if it is not done for a particular series or more. For consistency and precision, you can either work with all data seasonally adjusted or not, but don't mix them in the same file/project.

Q10. Real vs nominal terms?

A10. You should, generally, work with data in real terms (at constant prices), not in nominal terms (except that inflation is nominal by definition and measured in % or dln (dlq or dla, whichever you want, but be consistent in not confounding these two measures at the quarterly frequency when it comes to annual(ised) rates/percentages, as in our programs)). If you do not find data in real terms, you can transform the respective nominal data by dividing them (deflating them) by a measure of the price level, most commonly the GDP deflator or the CPI. Remember that Real Interest Rate = Nominal Interest Rate - (Expected) Inflation, but as it is hard to find data on expected inflation, you can just use actual (i.e., observed) inflation.
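A minimal sketch of both steps in EViews (with hypothetical series ngdp for a nominal series, p for the price level, and nomrate for the nominal interest rate in %):

  ' deflate a nominal series by the price level (x100 if p is an index based at 100)
  series rgdp = 100*ngdp/p
  ' actual inflation in %, as the log-difference of the price level
  series inf = 100*dlog(p)
  ' ex-post real interest rate via the Fisher relation, using actual instead of expected inflation
  ' (make sure nomrate and inf are expressed at the same, e.g., annualised, rate)
  series realrate = nomrate - inf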

Q11. How to check quantitatively for constancy/stationarity of some of the Kaldor facts (e.g., the "great ratios")?

A11. The "great ratios", since these are ratios, are one good example where you can work with nominal data in the numerator and denominator (you can, of course, also work alternatively with real data in both the numerator and the denominator of such ratios, but not a mixture of the two!). To quantitatively check, or more precisely to statistically test, for constancy, in fact, stationarity, use ADF and KPSS tests as in class, but make sure you have a relatively long sample, i.e., more than 80-100 observations ideally (quarterly data increase the size of their annual-data equivalent by 4 times) or at least some 40-50 (annual) observations. Please do read the relevant 5-10-15 pages in the 2 starting references I posted on Blackboard; these are methodologically very helpful (as I did write above, but let me stress it again)!
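As a sketch for one such ratio (a hypothetical consumption-to-GDP ratio built from series cons and gdp, nominal in both the numerator and the denominator):

  ' construct the great ratio
  series cy = cons/gdp
  ' ADF test (null: unit root) and KPSS test (null: stationarity), both with a constant
  cy.uroot(adf, const)
  cy.uroot(kpss, const)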

Q12. Why can't EViews take the log of a negative number?

A12. It is possible for both the real exchange rate (RER) index and the trade balance (TB) to be negative, by their definitions, so the log cannot be taken (mathematically impossible/undefined); that's fine. The log of the variables (in real terms, not nominal) is more relevant when you extract the cyclical component via HP filtering to study the BC facts; you don't really need the log for the Kaldor facts.
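A minimal sketch of the HP step (for a hypothetical real series rgdp, with the conventional smoothing parameter lambda = 1600 for quarterly data):

  ' log of the real series
  series lrgdp = log(rgdp)
  ' HP filter: saves the trend in lrgdp_hp; the cycle is the log-deviation from that trend
  lrgdp.hpf(lambda=1600) lrgdp_hp
  series lrgdp_cyc = lrgdp - lrgdp_hp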

Q13. Shall we use annual data or quarterly data?

A13. Annual data are more appropriate for the Kaldor facts, but you need at least 50-60 observations (for the stationarity tests to be meaningful, more than 100 observations are usually required), so you may use quarterly data for both the Kaldor and the BC facts.

Q14. How many variables to include in the VAR?

A14. After checking and confirming stationarity for the growth rates (dlq) or hpcycle transformations of the raw data, you can run a VAR with 3 transformed variables that are stationary/stationarised (you do not have to include real unit labour costs (RULC) as a 4th variable; just delete it from my VAR codes). If a problem of one or more series being found nonstationary (by ADF and KPSS) arises from their transformation into growth rates (dlq), use only the hpcycle transformation (and explain why you do so: e.g., as I briefly do here; this is fine).

Q15. How do we order the variables in the VAR?

A15. You have to run pairwise Granger causality tests, as we did in class, but you do not exclude any variable from the VAR; you use the test to order the variables in terms of causality: the variable that Granger-causes another must come before it. As we will see in the VAR lab-class, though, there may be circularity, so there might be 2-3 alternative orderings. If so, choose to report in the main text the ordering that makes most economic sense in terms of IRFs and FEVDs; sometimes there is no essential difference in the IRFs and FEVDs from the different alternative orderings (do mention whatever the case is in a footnote, sentence or paragraph in your report).
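A sketch of these pairwise tests in program form (with hypothetical stationarised series dlm, dly and dlp, and 4 lags as an example; justify your own lag choice):

  ' pairwise Granger causality tests among the three VAR variables, frozen as a table for the report
  group gvar dlm dly dlp
  freeze(tcause) gvar.cause(4)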

Q16. How do we describe the BC facts?

A16. You have first to take the log of a real variable. Then:

  • To measure volatility (relative to output) you work with the HP cyclical components. You look at the standard deviation of the HP cyclical component, reported as part of the descriptive statistics of each variable, with respect to that of output; see our respective code in eausbcfacts.prg:

  group groupdlaEA dlaEAPCR dlaEARULC dlaEAYED dlaEAYER dlaEAM1R
  groupdlaEA.stats(i)
  ' displays the descriptive statistics view of groupdlaEA for the individual samples
  freeze(tgroupdlaEA) groupdlaEA.stats
  ' creates and displays a table object tgroupdlaEA containing the contents of the previous view

Alternatively, you can see this SD when you plot the histogram as a graph option for this cyclical component.

  • To judge about comovement (procyclicity, acyclicity or countercyclicity), you can look at the cross-correlation coefficients between the respective HP cycles of the variables in a group, reported by executing the following command in your eausgrbcfacts.prg:

  @cor(groupdlaEA)

Freeze it as a table tcorrgroupdlaEA before closing the window; alternatively, open groupdlaEA and select Covariance Analysis followed by Correlation, which will display the same table in the output workfile.

An even faster way is to select two time series (their HP cycles) in a group, then open the group and go to View, Cross-Correlations, and look at lag 0.

  • To compute the autocorrelation as a measure of persistence (or inertia) in the macrovariables for the BC facts, you NOW DO NOT NEED THE HP CYCLE, but should work with the log of the series. Select a single series, e.g., lUSRGDP, then open it and go to View followed by Correlogram: there you can see the value at lag 1 for the partial autocorrelation function (PAC) reported by EViews. It is sufficient for our purposes to look only at lag 1 (that is the rho1 persistence coefficient we often discussed). A compact program sketch combining all three measures follows below.
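Pulling the three bullet points together, a compact sketch (with hypothetical series lrgdp and lrcons for log real GDP and log real consumption, and their HP cycles lrgdp_cyc and lrcons_cyc computed as in the HP-filter sketch after A12; check the exact view names against the EViews Help):

  ' relative volatility: compare the SD of each HP cycle with that of the output cycle
  group gcyc lrgdp_cyc lrcons_cyc
  gcyc.stats(i)
  ' comovement: correlation matrix of the HP cycles, frozen as a table (look at the output column)
  freeze(tcorr) gcyc.cor
  ' persistence: correlogram of the log series; read off the PAC at lag 1 (rho1)
  lrgdp.correl(12)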

Q17. How many copies of the coursework does a group/team need to submit (electronically, via Turnitin in Blackboard)?

A17. You should submit electronically, with a single submission by each group/team "research leader", via the respective link and following the detailed instructions to be found in the Assessment Submission folder of the Blackboard EC302 module website, using Turnitin Assignments and (I'd suggest) naming your file (uniformly) as:

ec302cw1_StNo1_StNo2_StNo3_StNo4.pdf (in increasing order of student numbers)

or as

ec302cw1_StName1_StName2_StName3_StName4.pdf (alphabetically by family name).

I shall then provide feedback and a mark for each submission. As we agreed, it is also good practice when doing empirical research to provide openly/publicly the data set, the programs/codes, and the input and output files used when running them, in a zip archive, which you could name with the same uniform file name I suggested, and then e-mail the zip separately to me as an attachment (this could be after the deadline, just by one member of the group).

Q18. How about exceeding the word limit?

A18. Plus or minus 10% with respect to the word limit (of 2000 words) is fine (I can exclude from the count the text in tables and graphs, and possibly the references; do not worry much), just don't submit papers that are too long (or too short).

Q19. Is it okay to have a quarterly time range from 1995 to 2020 if I am investigating the long-run Kaldor facts, or should I seek a longer timeline? I registered with the UK Data Service to access data further back; however, it states that my access is forbidden even though I have logged into my University account.

A19. Yes, quarterly data over 25 years will be fine, but you should explain, as you do here, when you present the data in the write-up, why you do not use annual data and a longer sample. It is strange that you have no access to the data: try to use the Parallels Client remote connection when you access the UK Data Service via the library (that could be the reason).

Q20 (if remote access to Apps Anywhere away from campus). I am experiencing issues with the EViews Parallels program, which is temperamental: one hour I can log into it and it works fine, but another hour I log into my Parallels account and nothing appears; it's as if the program has crashed or doesn't want to start. As an example, I was on EViews trying to create a workfile and then I got a message saying it has stopped working, and then it just crashed. Now I cannot open EViews again for some reason, but it will possibly open in a few hours (again, I don't know why). I have lost three days just trying to use this system; this is stressing me out as I have other assignment deadlines to meet. Could you please advise.

A20 (if remote access to Apps Anywhere away from campus). Try to download and use the free analogue of EViews, gretl (link in one of my announcements/messages on Bb); or use R or Stata for your project, if you know how to (or any other software). Or, if you can spend a day or two in our library labs, at a desktop there, you may have access to EViews without the remote connection, which is more stable. I am really sorry for the wasted days with software issues; it has happened to me too, of course, even if I always back up, and I know how frustrating it might be!…

Q21. With regards to the readme file in the zip archive for our empirical project, do you have a template for that or an example to use, since we have never done one like it?

A21. I have posted an example of a readme file in the folder/item where Coursework 1 and the references for it are collected on the Bb module website. Your readme file for CW1 should be (much) simpler and shorter, as you do not have that many variations of the data and code. So half a page or so should be enough.

Q22. For the background literature section, since the variables are already decided, we won't need to justify using them, unlike in our applied econometrics assignment in the Autumn term. So instead, can we use the background literature as a way to support and complement the methods and data used? Then, if our results show something particular, we can use this literature to support our narrative by explaining that another study has found something similar. Would that be ok?

A22. It is good to motivate why you use these 3 variables: the justification is obvious: you want to explore the Hume-Friedman observation (check back our slides and my respective video). You need not write more than a paragraph on this, relating it to Hume-Friedman. Yes, once you provide your results in terms of IRFs and FEVDs, you shall relate what you find to our EA and US findings from the lab-class (no need to copy-paste my IRFs) and to the literature you have read and cited (at least 10-15 sources).

Q23. Lastly, so far with all my assignments, we have not needed cover sheets, even with group projects. Does that not apply for the macro project?

A23. You may be required to enter a coversheet with Turnitin, so do it; even if it does not matter for me and would not affect the mark, provided that you have included either your student numbers or names in the submitted paper.

Q24. I'm collecting data for the business cycle and Kaldor facts; however, some of the data is only available annually (e.g. for the capital stock), therefore there aren't enough observations. Is there a way to still make use of this data?

A24. Yes, I know that capital stock data are annual and usually have limited sample coverage. We can't do anything about it. Even if 20 annual observations are not enough for a valid and conclusive stationarity test (one would wish at least about 100 observations), you can still proceed to ADF and KPSS tests and report what you find in a dense summary table, as we did in our lab-class. Originally, the Kaldor facts (of long-run growth) were studied using annual data (and, ironically, in a very short sample), so the traditional approach for these Kaldor facts would be to use annual data. However, I would not mind quarterly data as well, especially if annual data are available only for about 10-15 years. Quarterly % changes should then better be annualised, i.e., multiplied by 400 (as in our dla version in the code, not the dlq); see the sketch below. By contrast, the BC facts (of short-run fluctuations) were traditionally studied (see the King-Rebelo handbook chapter I posted for you as pdf on Bb) at a quarterly frequency, and it is not quite informative to use annual data on this dimension of the project. The mentioned pdf contains about 5-10 pages of dense methodological guidance on how you can condense in a table the BC facts for a country, and you may compare (even if indirectly, as the samples will not be identical) your respective findings to the US case in King and Rebelo. So, ideally, one would study the Kaldor facts using annual data and the BC facts using quarterly data. If there are limitations with such an approach, as we just discussed, at least this should be explained (in a footnote, 2-3 sentences or a paragraph), and then you proceed as best you can.
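For the annualisation step, a two-line sketch (for a hypothetical quarterly series rgdp):

  ' quarterly log-difference annualised in %, i.e., multiplied by 400 (the dla convention)
  series dla_rgdp = 400*dlog(rgdp)
  ' quarter-on-quarter % change, not annualised (the dlq convention); do not confound the two
  series dlq_rgdp = 100*dlog(rgdp)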

Q25. I have found data on long-term interest rates, but it is unclear whether they are in nominal or real terms. Should I just assume they are nominal?

A25. You cannot assume anything about the data: you need to carefully check the description at the source; usually these are scrupulously provided (maybe on a different "definitions" webpage, so do check). Send me the link for your interest rate data; I may be able to help you. Interest rates, nominal mostly, are easy to find everywhere on the Internet: IMF/IFS, World Bank, OECD, BoE, ECB, FRED, all other central banks.

Q26 (if remote access to Apps Anywhere away from campus). I have been trying to launch EViews on my Mac through AppsAnywhere following your instructions; however, when I launch it, only Parallels opens and EViews doesn't, and I am unsure of what I am doing wrong here. Would you be able to provide me with some help?

A26 (if remote access to Apps Anywhere away from campus). Try this: open Parallels, then open Google Chrome (not Safari!), then find AppsAnywhere on our Uni Rdg website (e.g., via the IT Services webpages), then scroll down AppsAnywhere, and launch EViews 11. That should work.

Q27. I have begun writing up my results and wondered, for the Kaldor facts which show a ratio to be "constant", whether it would be best to conduct stationarity tests on those variables to prove whether they are constant or not. If so, would I be looking for the variable to be stationary to prove it is constant over time?

A27. Correct: formal ADF and KPSS test results should be reported as statistical evidence on whether a variable in level or growth rate or ratio or growth rate of a ratio is stationary (i.e., mean reverting, but not constant). We did that in our lab-class; watch the recording.

Q28. My group members and I are currently doing the second question for the group project, and we are estimating a VAR for Japan. I have a question to ask. I have noticed in the code of the program eausvar.prg used for lab class 4 that the IRF was performed for both growth rates (dlq) and hpcycle transformations. For our group project, do we have to include both transformations? Or is just one transformation sufficient for our analysis if it is justified by previous literature? We did find that both transformations were stationary for Japan using the ADF and KPSS tests.

A28. I am glad that you have read the code and my instructions carefully. There is no particular preference in the literature as to whether dlq % changes or HP cyclical % deviations from trend are more correct or more suitable in running a VAR. If one of the transformations did not lead to stationarity, confirmed by at least ADF and KPSS tests unambiguously, then there is a reason to opt for the other. In your case, I would suggest that you run both specifications, dlq and hpcycle (when you have a batch code, it costs nothing: literally, the same run in 2-3 seconds produces the results for both specifications). Then check the IRFs and FEVDs, with a focus on the Hume-Friedman observation, or the first part of it, i.e., with respect to output (in case you do not find statistically significant effects on the price level or inflation). If they show similar key facts/trends/patterns, you can choose hpcycle to be the benchmark or baseline estimate and use it in the main text, and dlq to be the alternative estimate meant as a robustness check, summarising its IRFs and FEVDs in an appendix and saying in the main text that your results are robust (that is, do not change much quantitatively) if you use dlq instead of hpcycle. If the results do change visibly (quite a bit), then you can/should state in the main text and conclusions that what you find depends on the way you have performed the stationarisation of the original variables. Then pick the better results (i.e., the more interpretable ones from the perspective of economic intuition, given what we know and what has been written on Japan's monetary policy) for the main text, and summarise the alternative results in the appendix. Japan may give you some problems, as it has been in a liquidity trap since the late 1990s (read about that and cite 5-10 papers), but let's see. You may find that monetary policy has no real effects or effects on prices/inflation. But if so, that is what you report, presenting your IRFs and FEVDs. You may check if a sample split isolating away the Global Financial Crisis (GFC), so excluding 2007:Q3 through 2009:Q2 and working with 1994:1-2007:2 and then 2009:3-2020:3(?), can lead to differences in the IRFs and/or FEVDs pre-GFC and post-GFC.
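A sketch of such a sample split in program form (for a hypothetical VAR in three transformed variables hpc_m, hpc_y and hpc_p, re-estimated on each subsample):

  ' pre-GFC subsample
  smpl 1994q1 2007q2
  var v_pre.ls 1 4 hpc_m hpc_y hpc_p
  ' post-GFC subsample
  smpl 2009q3 2020q3
  var v_post.ls 1 4 hpc_m hpc_y hpc_p
  ' restore the full sample afterwards
  smpl @all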

Q29. I noticed your sample is from 1970 to 2015 and then you split it, whereas the data we collected is from 1994 to 2020. When comparing our estimates to EA and US data, should we create estimates for the same time period as our data, or just compare to the exact same data that you did in the lab classes (meaning the time period won't be the same)?

A29. A precise or direct comparison would require the same sample, so 1994-2020, which would mean additional work for you to update my dataset (and I think the last update of the EA data, online, goes through 2017:4, so it may not be feasible, which you should mention). A less precise or indirect comparison could refer to the VAR IRFs and FEVDs for Japan vs the US and EA in our EC302 sample/figures/tables, stating, however, as a note of caution, that the samples are not identical. The first approach is more careful (but not feasible, due to lack of EA data through 2020); the second can be used if you are short of time, stating the reason just mentioned, EA data unavailability, and would not be that much worse after all.

Q30. For your coding of the VAR, in part 3a I noticed you ordered mc first before running the Granger causality. Is there an economic reason for this, or did you just randomly order it in that way and then run the Granger causality to figure out the best ordering afterwards?

A30. I wanted to show you in the code 2 versions of the VAR, with different orderings. If I recall correctly, M1 ordered first is more consistent with empirical evidence, i.e., the Granger causality tests, while RULC ordered first and M1 last would make more theoretical sense from the perspective of how prices are formed after labour costs and how monetary policy responds to prices and output, having observed them, i.e., RULC → P → Y → M1. In your project, you can present a benchmark or baseline, either justified empirically (by Granger causality) or theoretically (in the same sense as I wrote), and an alternative ordering, or use generalised IRFs in addition to the Choleski-identified (by ordering) VAR to check robustness (in GIRFs, ordering does not matter, but a multivariate Normal distribution is assumed for the shocks, so there is a trade-off involved; as usual, nothing comes clean, with only benefits and no disadvantages, in economic systems and empirical research).
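A sketch of this robustness check (for a hypothetical estimated VAR v1; GIRFs are invariant to the ordering of the variables):

  ' Choleski-identified IRFs, where the ordering matters
  v1.impulse(20, imp=chol)
  ' generalised IRFs as a robustness check, where the ordering does not matter
  v1.impulse(20, imp=gen)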

I hope this detailed account of potential worries and pitfalls addresses most of your questions regarding the empirical project. If you have other questions, not included above, do send me an e-mail or ask during lectures or lab-classes.