
Methods Used in Economic Research: An Empirical Study of Trends and Levels

The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are classified into three main groups by method: theory, experiments, and empirics. The theory and empirics groups are almost equally large. Most empirical papers use the classical method, which derives an operational model from theory and runs regressions. The number of papers published increases by 3.3% p.a. Two trends are highly significant: the fraction of theoretical papers has fallen by 26 pp (percentage points), while the fraction of papers using the classical method has increased by 15 pp. Economic theory predicts that such papers exaggerate, and the papers that have been analyzed by meta-analysis confirm the prediction. It is discussed whether other methods have smaller problems.

1 Introduction

This paper studies the pattern in research methods in economics using a sample of 3,415 regular papers published in the years 1997, 2002, 2007, 2012, and 2017 in 10 journals. The analysis builds on the beliefs that truth exists, but is difficult to find, and that all the methods listed in the next paragraph have problems, as discussed in Sections 2 and 4. I do not imply that all – or even most – papers have these problems, but we rarely know how serious they are when we read a paper. A key aspect of the problem is that a “perfect” study is very demanding and requires far too much space to report, especially if the paper looks for usable results. Thus, each paper is just one look at an aspect of the problem analyzed. Only when many studies using different methods reach a joint finding can we trust that it is true.

Section 2 discusses the classification of papers by method into three main categories: (M1) Theory , with three subgroups: (M1.1) economic theory, (M1.2) statistical methods, and (M1.3) surveys. (M2) Experiments , with two subgroups: (M2.1) lab experiments and (M2.2) natural experiments. (M3) Empirics , with three subgroups: (M3.1) descriptive, (M3.2) classical empirics, and (M3.3) newer empirics. More than 90% of the papers are easy to classify, but a stochastic element enters in the classification of the rest. Thus, the study has some – hopefully random – measurement errors.

Section 3 discusses the sample of journals chosen. The choice has been limited by the following main criteria: They should be good journals below the top ten A-journals, i.e., my article covers B-journals, which are the journals where most research economists publish. They should be general interest journals, and they should be so different that patterns that generalize across these journals are likely to apply to more (most?) journals. The Appendix gives some crude counts of researchers, departments, and journals. It assesses that there are about 150 B-level journals, but less than half meet the criteria, so I have selected about 15% of the possible ones. This is the most problematic element in the study. If the reader accepts my choice, the paper tells an interesting story about economic research.

All B-level journals try hard to have a serious refereeing process. If our selection is representative, the 150 journals have increased the annual number of papers published from about 7,500 in 1997 to about 14,000 papers in 2017, giving about 200,000 papers for the period. Thus, the B-level dominates our science. Our sample is about 6% for the years covered, but less than 2% of all papers published in B-journals in the period. However, it is a larger fraction of the papers in general interest journals.

It is impossible for anyone to read more than a small fraction of this flood of papers. Consequently, researchers compete for space in journals and for attention from the readers, as measured in the form of citations. It should be uncontroversial that papers that hold a clear message are easier to publish and get more citations. Thus, an element of sales promotion may enter papers in the form of exaggeration , which is a joint problem for all eight methods. This is in accordance with economic theory that predicts that rational researchers report exaggerated results; see Paldam ( 2016 , 2018 ). For empirical papers, meta-methods exist to summarize the results from many papers, notably papers using regressions. Section 4.4 reports that meta-studies find that exaggeration is common.

The empirical literature surveying the use of research methods is quite small, as I have found only two articles: Hamermesh ( 2013 ) covers 748 articles published in three A-journals in 6 years spaced a decade apart, using a slightly different classification of methods, [1] while my study covers B-journals. Angrist, Azoulay, Ellison, Hill, and Lu ( 2017 ) use a machine-learning classification of 134,000 papers in 80 journals to look at the three main methods. My study subdivides the three categories into eight. The machine-learning algorithm is only sketched, so the paper is difficult to replicate, but it is surely a major effort. A key result in both articles is the strong decrease of theory in economic publications. This finding is confirmed, and it is shown that the corresponding increase in empirical articles is concentrated on the classical method.

I have tried to explain what I have done, so that everything is easy to replicate, in full or for one journal or one year. The coding of each article is available at least for the next five years. I should add that I have been in economic research for half a century. Some of the assessments in the paper will reflect my observations/experience during this period (indicated as my assessments). This especially applies to the judgements expressed in Section 4.

2 The eight categories

Table 1 reports that the annual number of papers in the ten journals has increased 1.9 times, or by 3.3% per year. The Appendix gives the full counts per category, journal, and year. By looking at data over two decades, I study how economic research develops. The increase in the production of papers is caused by two factors: the increase in the number of researchers, and the increasing importance of publications for the careers of researchers.
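The implied annual growth rate is simple compound arithmetic; a minimal sketch, taking only the 1.9-fold factor and the 20-year span from the text:

```python
# The text reports a 1.9-fold increase in annual papers over the
# 20 years from 1997 to 2017; the implied compound annual growth rate:
factor, years = 1.9, 20
annual = factor ** (1 / years) - 1
print(f"{annual:.1%}")  # about 3.3% per year
```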

The 3,415 papers

2.1 (M1) Theory: subgroups (M1.1) to (M1.3)

Table 2 lists the groups and main numbers discussed in the rest of the paper. Section 2.1 discusses (M1) theory. Section 2.2 covers (M2) experimental methods, while Section 2.3 looks at (M3) empirical methods using statistical inference from data.

The 3,415 papers – fractions in percent

The change of the fractions from 1997 to 2017 in percentage points

Note: Section 3.4 tests if the pattern observed in Table 3 is statistically significant. The Appendix reports the full data.

2.1.1 (M1.1) Economic theory

These are papers whose main content is the development of a theoretical model. The ideal theory paper presents a (simple) new model that recasts the way we look at something important. Such papers are rare and obtain large numbers of citations. Most theoretical papers present variants of known models and obtain few citations.

In a few papers, the analysis is verbal, but more than 95% rely on mathematics, though the technical level differs. Theory papers may start by a descriptive introduction giving the stylized fact the model explains, but the bulk of the paper is the formal analysis, building a model and deriving proofs of some propositions from the model. It is often demonstrated how the model works by a set of simulations, including a calibration made to look realistic. However, the calibrations differ greatly by the efforts made to reach realism. Often, the simulations are in lieu of an analytical solution or just an illustration suggesting the magnitudes of the results reached.

Theoretical papers suffer from the problem known as T-hacking, [2] where an able author, by a careful selection of assumptions, can tailor the theory to give the desired results. Thus, the proofs made from the model may represent the ability and preferences of the researcher rather than the properties of the economy.

2.1.2 (M1.2) Statistical method

Papers reporting new estimators and tests are published in a handful of specialized journals in econometrics and mathematical statistics – such journals are not included. In our general interest journals, some papers compare estimators on actual data sets. If the demonstration of a methodological improvement is the main feature of the paper, it belongs to (M1.2), but if the economic interpretation is the main point of the paper, it belongs to (M3.2) or (M3.3). [3]

Some papers, including a special issue of Empirical Economics (vol. 53–1), deal with forecasting models. Such models normally have a weak relation to economic theory. They are sometimes justified precisely because of their eclectic nature. They are classified as either (M1.2) or (M3.1), depending upon the focus. It appears that different methods work better on different data sets, and perhaps a trade-off exists between the user-friendliness of the model and the improvement reached.

2.1.3 (M1.3) Surveys

When the literature in a certain field becomes substantial, it normally presents a motley picture with an amazing variation, especially when different schools exist in the field. Thus, a survey is needed, and our sample contains 68 survey articles. They are of two types, where the second type is still rare:

2.1.3.1 (M1.3.1) Assessed surveys

Here, the author reads the papers and assesses what the most reliable results are. Such assessments require judgement that is often quite difficult to distinguish from priors, even for the author of the survey.

2.1.3.2 (M1.3.2) Meta-studies

They are quantitative surveys of estimates of parameters claimed to be the same. Over the two decades from 1997 to 2017, about 500 meta-studies have been made in economics. Our sample includes five, which is 0.15%. [4] Meta-analysis has two levels: The basic level collects and codes the estimates and studies their distribution. This is a rather objective exercise where results seem to replicate rather well. [5] The second level analyzes the variation between the results. This is less objective. The papers analyzed by meta-studies are empirical studies using method (M3.2), though a few use estimates from (M3.1) and (M3.3).
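The basic level of a meta-study can be illustrated with the standard inverse-variance (fixed-effect) average of reported estimates; the effect sizes and standard errors below are made up for illustration and not taken from any of the meta-studies discussed:

```python
# Basic level of a meta-study: pool estimates of the "same" parameter
# reported by many papers. All numbers here are hypothetical.
estimates = [0.8, 1.2, 0.5, 2.0, 0.9]   # reported effect sizes
ses = [0.3, 0.4, 0.2, 0.8, 0.35]        # their standard errors

weights = [1 / se ** 2 for se in ses]   # precision weights
meta = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
se_meta = (1 / sum(weights)) ** 0.5     # standard error of the pooled estimate
print(round(meta, 2), round(se_meta, 2))
```

More precise studies thus get more weight, which is why the pooled value lands closer to the tightly estimated effects than a plain average would.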

2.2 (M2) Experimental methods: subgroups (M2.1) and (M2.2)

Experiments are of three distinct types. The last two, which take place in real life, are rare, so they are lumped together.

2.2.1 (M2.1) Lab experiments

In 1997, 1.9% of the papers in the sample used this method; by 2017, the fraction had expanded to 9.7%. It is a technique that is much easier to apply to micro- than to macroeconomics, so it has spread unequally in the 10 journals, and many experiments are reported in a couple of specialized journals that are not included in our sample.

Most of these experiments take place in a laboratory, where the subjects communicate with a computer, giving a controlled, but artificial, environment. [6] A number of subjects are told a (more or less abstract) story and paid to react in either of a number of possible ways. A great deal of ingenuity has gone into the construction of such experiments and in the methods used to analyze the results. Lab experiments do allow studies of behavior that are hard to analyze in any other way, and they frequently show sides of human behavior that are difficult to rationalize by economic theory. It appears that such demonstration is a strong argument for the publication of a study.

However, everything is artificial – even the payment. In some cases, the stories told are so elaborate and abstract that framing must be a substantial risk; [7] see Levitt and List ( 2007 ) for a lucid summary, and Bergh and Wichardt ( 2018 ) for a striking example. In addition, experiments cost money, which limits the number of subjects. It is also worth pointing to the difference between expressive and real behavior. It is typically much cheaper for the subject to “express” nice behavior in a lab than to be nice in the real world.

(M2.2) Event studies are studies of real world experiments. They are of two types:

(M2.2.1) Field experiments analyze cases where some people get a certain treatment and others do not. The “gold standard” for such experiments is double blind random sampling, where everything (but the result!) is preannounced; see Christensen and Miguel ( 2018 ). Experiments with humans require permission from the relevant authorities, and the experiment takes time too. In the process, things may happen that compromise the strict rules of the standard. [8] Controlled experiments are expensive, as they require a team of researchers. Our sample of papers contains no study that fulfills the gold standard requirements, but there are a few less stringent studies of real life experiments.

(M2.2.2) Natural experiments take advantage of a discontinuity in the environment, i.e., the period before and after an (unpredicted) change of a law, an earthquake, etc. Methods have been developed to find the effect of the discontinuity. Often, such studies look like (M3.2) classical studies with many controls that may or may not belong. Thus, the problems discussed under (M3.2) will also apply.

2.3 (M3) Empirical methods: subgroups (M3.1) to (M3.3)

The remaining methods are studies making inference from “real” data, which are data samples where the researcher chooses the sample, but has no control over the data generating process.

(M3.1) Descriptive studies are inductive. The researcher describes the data, aiming at finding structures that tell a story, which can be interpreted. The findings may call for a formal test. If one clean test follows from the description, [9] the paper is classified under (M3.1). If a more elaborate regression analysis is used, it is classified as (M3.2). Descriptive studies often contain a great deal of theory.

Some descriptive studies present a new data set developed by the author to analyze a debated issue. In these cases, it is often possible to make a clean test, so to the extent that biases sneak in, they are hidden in the details of the assessments made when the data are compiled.

(M3.2) Classical empirics has three steps: It starts from a theory, which is developed into an operational model. Then it presents the data set, and finally it runs regressions.

The significance levels of the t-ratios on the estimated coefficients assume that the regression is the first meeting of the estimation model and the data. We all know that this is rarely the case; see also point (m1) in Section 4.4. In practice, the classical method is often just a presentation technique. The great virtue of the method is that it can be applied to real problems outside academia. The relevance comes with a price: The method is quite flexible, as many choices have to be made, and they often give different results. Preferences and interests, as discussed in Sections 4.3 and 4.4 below, notably as point (m2), may affect these choices.

(M3.3) Newer empirics . Partly as a reaction to the problems of (M3.2), the last 3–4 decades have seen a whole set of newer empirical techniques. [10] They include different types of VARs, Bayesian techniques, causality/co-integration tests, Kalman Filters, hazard functions, etc. I have found 162 (or 4.7%) papers where these techniques are the main ones used. The fraction was highest in 1997. Since then it has varied, but with no trend.

I think that the main reason for the lack of success for the new empirics is that it is quite bulky to report a careful set of co-integration tests or VARs, and they often show results that are far from useful in the sense that they are unclear and difficult to interpret. With some introduction and discussion, there is not much space left in the article. Therefore, we are dealing with a cookbook that makes for rather dull dishes, which are difficult to sell in the market.

Note the contrast between (M3.2) and (M3.3): (M3.2) makes it possible to write papers that are too good, while (M3.3) often makes them too dull. This contributes to explain why (M3.2) is getting (even) more popular and the lack of success of (M3.3), but then, it is arguable that it is more dangerous to act on exaggerated results than on results that are weak.

3 The 10 journals

The 10 journals chosen are: (J1) Can [Canadian Journal of Economics], (J2) Emp [Empirical Economics], (J3) EER [European Economic Review], (J4) EJPE [European Journal of Political Economy], (J5) JEBO [Journal of Economic Behavior & Organization], (J6) Inter [Journal of International Economics], (J7) Macro [Journal of Macroeconomics], (J8) Kyklos, (J9) PuCh [Public Choice], and (J10) SJE [Scandinavian Journal of Economics].

Section 3.1 discusses the choice of journals, while Section 3.2 considers how journals deal with the pressure for publication. Section 3.3 shows the marked difference in publication profile of the journals, and Section 3.4 tests if the trends in methods are significant.

3.1 The selection of journals

(i) They should be general interest journals – methodological journals are excluded. By general interest, I mean that they bring papers where an executive summary may interest policymakers and people in general. (ii) They should be journals in English (the Canadian Journal includes one paper in French), which are open to researchers from all countries, so that the majority of the authors are from outside the country of the journal. [11] (iii) They should be sufficiently different that patterns which apply to these journals tell a believable story about economic research. Note that (i) and (iii) require some compromises, as is evident in the choice of (J2), (J6), (J7), and (J8) ( Table 4 ).

The 10 journals covered

Note. Growth is the average annual growth from 1997 to 2017 in the number of papers published.

Methodological journals are excluded, as they are not interesting to outsiders. However, new methods are developed to be used in general interest journals. From studies of citations, we know that useful methodological papers are highly cited. If they remain unused, we presume that it is because they are useless, though, of course, there may be a long lag.

The choice of journals may contain some subjectivity, but I think that they are sufficiently diverse so that patterns that generalize across these journals will also generalize across a broader range of good journals.

The papers included are the regular research articles. Consequently, I exclude short notes to other papers and book reviews, [12] except for a few article-long discussions of controversial books.

3.2 Creating space in journals

As mentioned in the introduction, the annual production of research papers in economics has now reached about 1,000 papers in top journals and about 14,000 papers in the group of good journals. [13] The production has grown by 3.3% per year, and thus it has doubled over the last twenty years. The hard-working researcher will read less than 100 papers a year, and I know of no signs that this number is increasing. Thus, the upward trend in publication must be due to the large increase in the importance of publications for the careers of researchers. There has also been a large increase in the number of researchers, but as citations are increasingly skewed toward the top journals (see Heckman & Moktan, 2018 ), this has not increased the demand for papers correspondingly. The pressures from the supply side have caused journals to look for ways to create space.

Book reviews have dropped to less than a third of their former number. Perhaps this also indicates that economists read fewer books than they used to. Journals have increasingly come to use smaller fonts and larger pages, allowing more words per page. The journals from North-Holland Elsevier have managed to cram almost two old pages into one new one. [14] This makes it easier to publish papers, while they become harder to read.

Many journals have changed their numbering system for the annual issues, making it less transparent how much they publish. Only three – Canadian Economic Journal, Kyklos, and Scandinavian Journal of Economics – have kept the schedule of publishing one volume of four issues per year. It gives about 40 papers per year. Public Choice has a (fairly) consistent system with four volumes of two double issues per year – this gives about 100 papers. The remaining journals have changed their numbering system and increased the number of papers published per year – often dramatically.

Thus, I assess that the wave of publications is caused by the increased supply of papers and not by the demand for reading material. Consequently, the study confirms and updates the observation by Temple ( 1918 , p. 242): “… as the world gets older the more people are inclined to write but the less they are inclined to read.”

3.3 How different are the journals?

The Appendix reports the counts of the research methods for each year and journal. From these counts, a set of χ²-scores is calculated for the three main groups of methods – they are reported in Table 5 . It gives the χ²-test comparing the profile of each journal to that of the other nine journals, taken as the theoretical distribution.

The methodological profile of the journals – χ²-scores for main groups

Note: The χ²-scores are calculated relative to all other journals. The sign (+) or (−) indicates if the journal has relatively too many or too few papers in the category. The P-values for the χ²(3)-test always reject that the journal has the same methodological profile as the other nine journals.

The test rejects that the distribution is the same as the average for any of the journals. The closest to the average is the EJPE and Public Choice. The two most deviating scores are for the most micro-oriented journal JEBO, which brings many experimental papers, and of course, Empirical Economics, which brings many empirical papers.
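The journal-profile test can be sketched as follows: the method counts of one journal are compared with the pooled counts of the remaining nine, used as the expected distribution. All counts below are hypothetical; only the χ²-score itself is computed:

```python
# Chi-square score of one journal's method profile against the pooled
# profile of the other nine journals (all counts are hypothetical).
journal = [30, 25, 45]    # theory, experiments, empirics
others = [500, 150, 350]  # pooled counts for the other nine journals

n = sum(journal)
expected = [n * c / sum(others) for c in others]  # expected under pooled profile
chi2 = sum((o - e) ** 2 / e for o, e in zip(journal, expected))
print(round(chi2, 2))  # compare with the relevant chi-square critical value
```

A large score, as in this invented example, rejects that the journal shares the pooled profile.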

3.4 Trends in the use of the methods

Table 3 already gave an impression of the main trends in the methods preferred by economists. I now test if these impressions are statistically significant. The tests have to be tailored to disregard three differences between the journals: their methodological profiles, the number of papers they publish, and the trend in the number. Table 6 reports a set of distribution free tests, which overcome these differences. The tests are done on the shares of each research method for each journal. As the data cover five years, it gives 10 pairs of years to compare. [15] The three trend-scores in the []-brackets count how often the shares go up, down, or stay the same in the 10 cases. This is the count done for a Kendall rank correlation comparing the five shares with a positive trend (such as 1, 2, 3, 4, and 5).

Trend-scores and tests for the eight subgroups of methods across the 10 journals

Note: The three trend-scores in each [ I 1 , I 2 , I 3 ]-bracket are a Kendall-count over all 10 combinations of years. I 1 counts how often the share goes up, I 2 counts how often it goes down, and I 3 counts the number of ties. Most ties occur when there are no observations in either year. Thus, I 1 + I 2 + I 3 = 10. The tests are two-sided binomial tests disregarding the ties. The test results in bold are significant at the 5% level.

The first set of trend-scores, for (M1.1) and (J1), is [1, 9, 0]. It means that 1 of the 10 share-pairs increases, while nine decrease and no ties are found. The two-sided binomial test gives 2%, so this is unlikely to happen by chance. Nine of the ten journals in the (M1.1)-column have a majority of falling shares. The important point is that the counts in one column can be added – as is done in the all-row; this gives a powerful trend test that disregards differences between journals and the number of papers published ( Table A1 ).
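The trend-score counting and the binomial test can be sketched in a few lines of stdlib Python. The five shares below are invented, chosen only to reproduce a [1, 9, 0] pattern of the kind described in the text:

```python
from itertools import combinations
from math import comb

# Shares of one method in one journal for the five sample years
# 1997, 2002, 2007, 2012, 2017 (hypothetical numbers).
shares = [0.60, 0.55, 0.48, 0.50, 0.34]

up = down = ties = 0
for a, b in combinations(shares, 2):  # all 10 (earlier, later) year pairs
    if b > a:
        up += 1
    elif b < a:
        down += 1
    else:
        ties += 1

# Two-sided binomial test on the non-tied pairs, p = 1/2 under no trend.
n, k = up + down, min(up, down)
p = min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n)
print([up, down, ties], round(p, 3))  # → [1, 9, 0] 0.021
```

Because the test uses only the direction of each change, it is unaffected by how many papers a journal publishes, which is why the counts can be added across journals in the all-row.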

Four of the trend-tests are significant: the fall in theoretical papers and the rise in classical papers, as well as the rises in the shares of statistical method and event studies. It is surprising that there is no trend in the number of experimental studies, but see Table A2 (in the Appendix).

4 An attempt to interpret the pattern found

The development in the methods pursued by researchers in economics is a reaction to the demand and supply forces on the market for economic papers. As already argued, it seems that a key factor is the increasing production of papers.

The shares add to 100, so the decline of one method means that the others rise. Section 4.1 looks at the biggest change – the reduction in theory papers. Section 4.2 discusses the rise in two new categories. Section 4.3 considers the large increase in the classical method, while Section 4.4 looks at what we know about that method from meta-analysis.

4.1 The decline of theory: economics suffers from theory fatigue [16]

The share of papers in economic theory has dropped from 59.5% to 33.6% – this is the largest change for any of the eight subgroups. [17] It is highly significant in the trend test. I attribute this drop to theory fatigue.

As mentioned in Section 2.1, the ideal theory paper presents a (simple) new model that recasts the way we look at something important. However, most theory papers are less exciting: They start from the standard model and argue that a well-known conclusion reached from the model hinges upon a debatable assumption – if it changes, so does the conclusion. Such papers are useful. From a literature on one main model, the profession learns its strengths and weaknesses. It appears that no generally accepted method exists to summarize this knowledge in a systematic way, though many thoughtful summaries have appeared.

I think that there is a deeper problem explaining theory fatigue. It is that many theoretical papers are quite unconvincing. Granted that the calculations are done right, believability hinges on the realism of the assumptions at the start and of the results presented at the end. In order for a model to convince, it should (at least) demonstrate the realism of either the assumptions or the outcome. [18] If both ends appear to hang in the air, it becomes a game giving little new knowledge about the world, however skillfully played.

The theory fatigue has caused a demand for simulations demonstrating that the models can mimic something in the world. Kydland and Prescott pioneered calibration methods (see their 1991 article). Calibrations may be carefully done, but they often appear as a numerical solution of a model that is too complex to allow an analytical solution.

4.2 Two examples of waves: one that is still rising and another that is fizzling out

When a new method of gaining insights in the economy first appears, it is surrounded by doubts, but it also promises a high marginal productivity of knowledge. Gradually the doubts subside, and many researchers enter the field. After some time this will cause the marginal productivity of the method to fall, and it becomes less interesting. The eight methods include two newer ones: Lab experiments and newer stats. [19]

It is not surprising that papers with lab experiments are increasing, though it did take a long time: The seminal paper presenting the technique was Smith ( 1962 ), but only a handful of papers are from the 1960s. Charles Plott organized the first experimental lab 10 years later – this created a new standard for experiments, but required an investment in a lab and some staff. Labs became more common in the 1990s as PCs got cheaper and software was developed to handle experiments, but only 1.9% of the papers in the 10 journals reported lab experiments in 1997. This has now increased to 9.7%, so the wave is still rising. The trend in experiments is concentrated in a few journals, so the trend test in Table 6 is insignificant, but it is significant in the Appendix Table A2 , where it is done on the sum of articles irrespective of the journal.

In addition to the rising share of lab experiment papers in some journals, the journal Experimental Economics was started in 1998, where it published 281 pages in three issues. In 2017, it had reached 1,006 pages in four issues, [20] which is an annual increase of 6.5%.

Compared with the success of experimental economics, the motley category of newer empirics has had a more modest success: the fractions of papers in the 5 years are 5.8%, 5.2%, 3.5%, 5.4%, and 4.2%, which shows no trend. Newer stats also require investment, but mainly in human capital. [21] Some of the papers using the classical methodology contain a table with Dickey-Fuller tests or some eigenvalues of the data matrix, but these are normally peripheral to the analysis. A couple of papers use Kalman filters, and a dozen papers use Bayesian VARs. However, it is clear that the newer empirics have made little headway into our sample of general interest journals.

4.3 The steady rise of the classical method: flexibility rewarded

The typical classical paper provides estimates of a key effect that decision-makers outside academia want to know. This makes the paper policy relevant right from the start, and in many cases, it is possible to write a one page executive summary to the said decision-makers.

The three-step convention (see Section 2.3) is often followed rather loosely. The estimation model is nearly always much simpler than the theory. Thus, while the model can be derived from the theory, the reverse does not apply. Sometimes, the model seems to follow straight from common sense, and if the link from the theory to the model is thin, it raises the question: Is the theory really necessary? In such cases, it is hard to be convinced that the tests “confirm” the theory, but then, of course, tests only say that the data do not reject the theory.

The classical method is often only a presentation device. Think of a researcher who has reached a nice publishable result through a long and tortuous path, including some failed attempts to find such results. It is not possible to describe that path within the severely limited space of an article. In addition, such a presentation would be rather dull to read, and none of us likes to talk about wasted efforts that in hindsight seem a bit silly. Here, the classical method becomes a convenient presentation device.

The biggest source of variation in the results is the choice of control/modifier variables. All datasets presumably contain some general and some special information, where the latter depends on the circumstances prevailing when the data were compiled. The regression should be controlled for these circumstances in order to reach the general result. Such ceteris paribus controls are not part of the theory, so many possible controls may be added. The ones chosen for publication often appear to be the ones delivering the “right” results by the priors of the researcher. The justification for their inclusion is often thin, and if two-stage regressions are used, the first stage instruments often have an even thinner justification.

Thus, the classical method is rather malleable to the preferences and interests of researchers and sponsors. This means that some papers using the classical technique are not what they pretend to be, as already pointed out by Leamer (1983); see also Paldam (2018) for new references and theory. The fact that data mining is tempting suggests that it is often possible to reach smashing results, making the paper nice to read. This may be precisely why it is cited.
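The flexibility problem can be illustrated with a toy simulation (not from the paper; all numbers are invented for illustration): with one true effect and a handful of candidate controls, merely varying which controls enter the regression produces a whole menu of estimates to choose from.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                    # variable of interest, true effect 0.1
controls = rng.normal(size=(n, 6))        # six candidate controls, all irrelevant here
y = 0.1 * x + 0.5 * rng.normal(size=n)    # invented data-generating process

def ols_beta_x(control_idx):
    """OLS coefficient on x when the listed controls are included."""
    X = np.column_stack([np.ones(n), x] + [controls[:, j] for j in control_idx])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# run every possible control set (2^6 = 64 specifications)
estimates = [ols_beta_x(s)
             for r in range(7)
             for s in itertools.combinations(range(6), r)]

print(f"specifications: {len(estimates)}")
print(f"range of estimated effects: {min(estimates):.3f} to {max(estimates):.3f}")
```

Even with only six candidate controls there are already 64 specifications; a researcher who reports only the most favorable one thereby overstates the effect relative to the average specification.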

Many papers using the classical method throw in a few exotic statistical techniques to demonstrate the robustness of the result and the ability of the researcher. This presumably helps to generate credibility.

4.4 Knowledge about classical papers reached from meta-studies

Individual studies using the classical method often look better than they are, and thus they are more uncertain than they appear, but we may think of the value the estimates converge to for large N (the number of observations) as the truth. The exaggeration is largest at the beginning of a new literature, but it gradually becomes smaller. Thus, the classical method does generate truth when the effect searched for has been studied from many sides. The word research does mean that the search has to be repeated! It is highly risky to trust only a few papers.
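A minimal sketch of this point, under invented numbers: if each "study" is a noisy estimate of the same effect, reporting only statistically significant results exaggerates, while a precision-weighted (fixed-effect) average over all studies converges on the truth as the number of studies grows.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.1
study_sizes = rng.integers(30, 500, size=200)   # 200 hypothetical studies of varying size

est, se = [], []
for n in study_sizes:
    draws = true_effect + rng.normal(size=n)    # unit-variance outcome data
    est.append(draws.mean())
    se.append(draws.std(ddof=1) / np.sqrt(n))

est, se = np.array(est), np.array(se)
w = 1.0 / se**2                                  # precision weights (fixed-effect model)
meta = np.sum(w * est) / np.sum(w)

# averaging only "significant" studies exaggerates; pooling everything does not
significant = est / se > 1.96
print(f"mean of significant studies: {est[significant].mean():.3f}")
print(f"precision-weighted meta-average: {meta:.3f}")
```

The mean of the significant studies lies above the true 0.1, while the meta-average over all 200 studies sits close to it, which is the convergence-for-large-N argument in miniature.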

Meta-analysis has produced other findings as well: Results in top journals do not stand out. It is necessary to look at many journals, as many papers on the same effect are needed. Little of the large variation between results is due to the choice of estimators.

A similar development should also occur in experimental economics. Experiments fall into families: a large number cover prisoner’s dilemma games, but there are also many studies of dictator games, auction games, etc. Surveys summarizing what we have learned about these games seem badly needed. Assessed summaries of old experiments are common, notably in introductions to papers reporting new ones. It should be possible to extract the knowledge reached by sets of related lab experiments in a quantitative way, by some sort of meta-technique, but this has barely started. The first pioneering meta-studies of lab experiments do find the usual wide variation of results from seemingly closely related experiments. [25] A recent large-scale replicability study by Camerer et al. (2018) finds that experiments published in the high-quality journals Nature and Science exaggerate by a factor of two, just like regression studies using the classical method.

5 Conclusion

The study presents evidence that over the last 20 years economic research has moved away from theory towards empirical work using the classical method.

From the eighties onward, there has been a steady stream of papers pointing out that the classical method suffers from excess flexibility. It does deliver relevant results, but they tend to be too good. [26] While we increasingly know the size of the problems of the classical method, systematic knowledge about the problems of the other methods is weaker. It is possible that those problems are smaller, but we do not know.

Therefore, it is clear that obtaining solid knowledge about the size of an important effect requires a great many papers analyzing many aspects of the effect, plus a careful quantitative survey. It is a well-known principle in the harder sciences that results need repeated independent replication to be truly trustworthy. In economics, this is accepted only in principle.

The classical method of empirical research is gradually winning, and this is a fine development: It does give answers to important policy questions. These answers are highly variable and often exaggerated, but through the efforts of many competing researchers, solid knowledge will gradually emerge.

Home page: http://www.martin.paldam.dk

Acknowledgments

The paper has been presented at the 2018 MAER-Net Colloquium in Melbourne, the Kiel Aarhus workshop in 2018, and at the European Public Choice 2019 Meeting in Jerusalem. I am grateful for all comments, especially from Chris Doucouliagos, Eelke de Jong, and Bob Reed. In addition, I thank the referees for constructive advice.

Conflict of interest: Author states no conflict of interest.

Appendix: Two tables and some assessments of the size of the profession

The text needs some numbers to assess the representativeness of the results reached. These numbers just need to be orders of magnitude. I use the standard three-level classification of researchers, departments, and journals into A, B, and C. The connections between the three categories are dynamic and rely on complex sorting mechanisms. In an international setting, it matters that researchers have preferences for countries, notably their own. The relation between the three categories has a stochastic element.

The World of Learning organization reports on 36,000 universities, colleges, and other institutes of tertiary education and research. Many of these institutions are mainly engaged in undergraduate teaching, and some are quite modest. If half of these institutions have a program in economics with a staff of at least five, the total stock of academic economists is about 100,000, most of whom are at the C-level.
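The back-of-envelope calculation above is easy to reproduce (the one-half share and the staff of five are the text's own rough assumptions):

```python
# Back-of-envelope size of the profession, following the text's assumptions.
institutions = 36_000        # tertiary institutions reported by World of Learning
share_with_econ = 0.5        # assumed share with an economics program
min_staff = 5                # assumed minimal staff per program

economists = institutions * share_with_econ * min_staff
print(f"{economists:,.0f} academic economists")  # 90,000, i.e. order 100,000
```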

The A-level consists of about 500 tenured researchers working at the top ten universities, who (mainly) publish in the top 10 journals that publish fewer than 1,000 papers per year; [27] see Heckman and Moktan (2020). They (mainly) cite each other, but they greatly influence other researchers. [28] The B-level consists of about 15,000–20,000 researchers who work at 400–500 research universities with graduate programs and ambitions to publish. They (mainly) publish in the next level of about 150 journals. [29] In addition, there are at least another 1,000 institutions that strive to move up in the hierarchy.

The counts for each of the 10 journals

Counts, shares, and changes for all ten journals for subgroups

Note: The trend-scores are calculated as in Table 6 . Compared to the results in Table 6 , the results are similar, but the power is less than before. However, note that the results in Column (M2.1) dealing with experiments are stronger in Table A2 . This has to do with the way missing observations are treated in the test.

Angrist, J., Azoulay, P., Ellison, G., Hill, R., & Lu, S. F. (2017). Economic research evolves: Fields and styles. American Economic Review (Papers & Proceedings), 107, 293–297. 10.1257/aer.p20171117

Bergh, A., & Wichardt, P. C. (2018). Mine, ours or yours? Unintended framing effects in dictator games (IFN Working Paper No. 1205). Research Institute of Industrial Economics, Stockholm. München: CESifo. 10.2139/ssrn.3208589

Brodeur, A., Cook, N., & Heyes, A. (2020). Methods matter: p-Hacking and publication bias in causal analysis in economics. American Economic Review, 110(11), 3634–3660. 10.1257/aer.20190687

Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2, 637–644. 10.1038/s41562-018-0399-z

Card, D., & DellaVigna, S. (2013). Nine facts about top journals in economics. Journal of Economic Literature, 51, 144–161. 10.3386/w18665

Christensen, G., & Miguel, E. (2018). Transparency, reproducibility, and the credibility of economics research. Journal of Economic Literature, 56, 920–980. 10.3386/w22989

Doucouliagos, H., Paldam, M., & Stanley, T. D. (2018). Skating on thin evidence: Implications for public policy. European Journal of Political Economy, 54, 16–25. 10.1016/j.ejpoleco.2018.03.004

Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14, 583–610. 10.1007/s10683-011-9283-7

Fiala, L., & Suetens, S. (2017). Transparency and cooperation in repeated dilemma games: A meta study. Experimental Economics, 20, 755–771. 10.1007/s10683-017-9517-4

Friedman, M. (1953). Essays in positive economics. Chicago: University of Chicago Press.

Hamermesh, D. (2013). Six decades of top economics publishing: Who and how? Journal of Economic Literature, 51, 162–172. 10.3386/w18635

Heckman, J. J., & Moktan, S. (2020). Publishing and promotion in economics: The tyranny of the top five. Journal of Economic Literature, 58, 419–470. 10.3386/w25093

Ioannidis, J. P. A., Stanley, T. D., & Doucouliagos, H. (2017). The power of bias in economics research. Economic Journal, 127, F236–F265. 10.1111/ecoj.12461

Johansen, S., & Juselius, K. (1990). Maximum likelihood estimation and inference on cointegration – with application to the demand for money. Oxford Bulletin of Economics and Statistics, 52, 169–210. 10.1111/j.1468-0084.1990.mp52002003.x

Justman, M. (2018). Randomized controlled trials informing public policy: Lessons from Project STAR and class size reduction. European Journal of Political Economy, 54, 167–174. 10.1016/j.ejpoleco.2018.04.005

Kydland, F., & Prescott, E. C. (1991). The econometrics of the general equilibrium approach to business cycles. Scandinavian Journal of Economics, 93, 161–178. 10.2307/3440324

Leamer, E. E. (1983). Let’s take the con out of econometrics. American Economic Review, 73, 31–43.

Levitt, S. D., & List, J. A. (2007). On the generalizability of lab behaviour to the field. Canadian Journal of Economics, 40, 347–370. 10.1111/j.1365-2966.2007.00412.x

Paldam, M. (2015). Meta-analysis in a nutshell: Techniques and general findings. Economics: The Open-Access, Open-Assessment E-Journal, 9, 1–4. 10.5018/economics-ejournal.ja.2015-11

Paldam, M. (2016). Simulating an empirical paper by the rational economist. Empirical Economics, 50, 1383–1407. 10.1007/s00181-015-0971-6

Paldam, M. (2018). A model of the representative economist, as researcher and policy advisor. European Journal of Political Economy, 54, 6–15. 10.1016/j.ejpoleco.2018.03.005

Smith, V. (1962). An experimental study of competitive market behavior. Journal of Political Economy, 70, 111–137. 10.1017/CBO9780511528354.003

Stanley, T. D., & Doucouliagos, H. (2012). Meta-regression analysis in economics and business. Abingdon: Routledge. 10.4324/9780203111710

Temple, C. L. (1918). Native races and their rulers: Sketches and studies of official life and administrative problems in Nigeria. Cape Town: Argus.

© 2021 Martin Paldam, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.


Empirical Economics

Journal of the Institute for Advanced Studies, Vienna, Austria

  • Exemplary topics are treatment effect estimation, policy evaluation, forecasting, and econometric methods.
  • Contributions may focus on the estimation of established relationships between economic variables or on the testing of hypotheses.
  • Emphasizes the replicability of empirical results: replication studies may be published as short papers.
  • Follows a single-blind review procedure (authors’ names are known to reviewers, but reviewers are anonymous to authors). 
  • Submissions that have poor chances of receiving positive reviews are routinely rejected without sending the papers for review.


  • Robert M. Kunst
  • Bertrand Candelon
  • Subal C. Kumbhakar
  • Arthur H. O. van Soest
  • Joakim Westerlund


Latest issue

Volume 66, Issue 5

Latest articles

Revisiting the countercyclicality of fiscal policy.

  • João Tovar Jalles
  • Youssouf Kiendrebeogo
  • Roberto Piazza


Money demand stability in India: allowing for an unknown number of breaks

  • Masudul Hasan Adil
  • Aditi Chaubal


Output, employment, and price effects of U.S. narrative tax changes: a factor-augmented vector autoregression approach


Uncovering heterogeneous regional impacts of Chinese monetary policy

  • Andrew Tsang


The impact of robots on labor demand: evidence from job vacancy data in South Korea


Journal updates

Lawrence R. Klein Award 2021/2022

This biennial prize is awarded for the best paper published in the journal Empirical Economics.

The Empirical Economics prize was awarded for the first time by Springer in 2006, and was renamed in honor of the Nobel prize winner Lawrence R. Klein in 2013.

Call for Papers: Special Issue on Applications of Efficiency and Productivity Analysis

Guest Editors: Subal C. Kumbhakar, Christopher F. Parmeter and Emir Malikov

Deadline for paper submission: February 29, 2020

Journal information

  • ABS Academic Journal Quality Guide
  • Australian Business Deans Council (ABDC) Journal Quality List
  • Engineering Village – GEOBASE
  • Google Scholar
  • Japanese Science and Technology Agency (JST)
  • Norwegian Register for Scientific Journals and Series
  • OCLC WorldCat Discovery Service
  • Research Papers in Economics (RePEc)
  • Social Science Citation Index
  • TD Net Discovery Service
  • UGC-CARE List (India)


© Springer-Verlag GmbH Germany, part of Springer Nature


MIT News | Massachusetts Institute of Technology


The case for economics — by the numbers


A new study examines 140,000 economics papers published from 1970 to 2015, tallying the “extramural” citations that economics papers received in 16 other academic fields, including sociology, medicine, and public health.


In recent years, criticism has been levelled at economics for being insular and unconcerned about real-world problems. But a new study led by MIT scholars finds the field increasingly overlaps with the work of other disciplines, and, in a related development, has become more empirical and data-driven, while producing less work of pure theory.

The study examines 140,000 economics papers published over a 45-year span, from 1970 to 2015, tallying the “extramural” citations that economics papers received in 16 other academic fields — ranging from other social sciences such as sociology to medicine and public health. In seven of those fields, economics is the social science most likely to be cited, and it is virtually tied for first in citations in another two disciplines.

In psychology journals, for instance, citations of economics papers have more than doubled since 2000. Public health papers now cite economics work twice as often as they did 10 years ago, and citations of economics research in fields from operations research to computer science have risen sharply as well.

While citations of economics papers in the field of finance have risen slightly in the last two decades, that rate of growth is no higher than it is in many other fields, and the overall interaction between economics and finance has not changed much. That suggests economics has not been unusually oriented toward finance issues — as some critics have claimed since the banking-sector crash of 2007-2008. And the study’s authors contend that as economics becomes more empirical, it is less dogmatic.

“If you ask me, economics has never been better,” says Josh Angrist, an MIT economist who led the study. “It’s never been more useful. It’s never been more scientific and more evidence-based.”

Indeed, the proportion of economics papers based on empirical work — as opposed to theory or methodology — cited in top journals within the field has risen by roughly 20 percentage points since 1990.

The paper, “Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship,” appears in this month’s issue of the Journal of Economic Literature.

The co-authors are Angrist, who is the Ford Professor of Economics in MIT’s Department of Economics; Pierre Azoulay, the International Programs Professor of Management at the MIT Sloan School of Management; Glenn Ellison, the Gregory K. Palm Professor of Economics and associate head of the Department of Economics; Ryan Hill, a doctoral candidate in MIT’s Department of Economics; and Susan Feng Lu, an associate professor of management in Purdue University’s Krannert School of Management.

Taking critics seriously

As Angrist acknowledges, one impetus for the study was the wave of criticism the economics profession has faced over the last decade, after the banking crisis and the “Great Recession” of 2008-2009, which included the finance-sector crash of 2008. The paper’s title alludes to the film “Inside Job” — whose thesis holds that, as Angrist puts it, “economics scholarship as an academic enterprise was captured somehow by finance, and that academic economists should therefore be blamed for the Great Recession.”

To conduct the study, the researchers used the Web of Science, a comprehensive bibliographic database, to examine citations between 1970 and 2015. The scholars developed machine-learning techniques to classify economics papers into subfields (such as macroeconomics or industrial organization) and by research “style” —  meaning whether papers are primarily concerned with economic theory, empirical analysis, or econometric methods.

“We did a lot of fine-tuning of that,” says Hill, noting that for a study of this size, a machine-learning approach is a necessity.

The study also details the relationship between economics and four additional social science disciplines: anthropology, political science, psychology, and sociology. Among these, political science has overtaken sociology as the discipline most engaged with economics. Psychology papers now cite economics research about as often as they cite works of sociology.

The new intellectual connectivity between economics and psychology appears to be a product of the growth of behavioral economics, which examines the irrational, short-sighted financial decision-making of individuals — a different paradigm than the assumptions about rational decision-making found in neoclassical economics. During the study’s entire time period, one of the economics papers cited most often by other disciplines is the classic article “Prospect Theory: An Analysis of Decision under Risk,” by behavioral economists Daniel Kahneman and Amos Tversky.

Beyond the social sciences, other academic disciplines for which the researchers studied the influence of economics include four classic business fields — accounting, finance, management, and marketing — as well as computer science, mathematics, medicine, operations research, physics, public health, and statistics.

The researchers believe these “extramural” citations of economics are a good indicator of economics’ scientific value and relevance.

“Economics is getting more citations from computer science and sociology, political science, and psychology, but we also see fields like public health and medicine starting to cite economics papers,” Angrist says. “The empirical share of the economics publication output is growing. That’s a fairly marked change. But even more dramatic is the proportion of citations that flow to empirical work.”

Ellison emphasizes that because other disciplines are citing empirical economics more often, it shows that the growth of empirical research in economics is not just a self-reinforcing change, in which scholars chase trendy ideas. Instead, he notes, economists are producing broadly useful empirical research.  

“Political scientists would feel totally free to ignore what economists were writing if what economists were writing today wasn’t of interest to them,” Ellison says. “But we’ve had this big shift in what we do, and other disciplines are showing their interest.”

It may also be that the empirical methods used in economics now more closely match those in other disciplines as well.

“What’s new is that economics is producing more accessible empirical work,” Hill says. “Our methods are becoming more similar … through randomized controlled trials, lab experiments, and other experimental approaches.”

But as the scholars note, there are exceptions to the general pattern in which greater empiricism in economics corresponds to greater interest from other fields. Computer science and operations research papers, which increasingly cite economists’ research, are mostly interested in the theory side of economics. And the growing overlap between psychology and economics involves a mix of theory and data-driven work.

In a big country

Angrist says he hopes the paper will help journalists and the general public appreciate how varied economics research is.

“To talk about economics is sort of like talking about [the United States of] America,” Angrist says. “America is a big, diverse country, and economics scholarship is a big, diverse enterprise, with many fields.”

He adds: “I think economics is incredibly eclectic.”

Ellison emphasizes this point as well, observing that the sheer breadth of the discipline gives economics the ability to have an impact in so many other fields.  

“It really seems to be the diversity of economics that makes it do well in influencing other fields,” Ellison says. “Operations research, computer science, and psychology are paying a lot of attention to economic theory. Sociologists are paying a lot of attention to labor economics, marketing and management are paying attention to industrial organization, statisticians are paying attention to econometrics, and the public health people are paying attention to health economics. Just about everything in economics is influential somewhere.”

For his part, Angrist notes that he is a biased observer: He is a dedicated empiricist and a leading practitioner of research that uses quasiexperimental methods. His studies leverage circumstances in which, say, policy changes or random assignments in civic life allow researchers to study two otherwise similar groups of people separated by one thing, such as access to health care.

Angrist was also a graduate-school advisor of Esther Duflo PhD ’99, who won the Nobel Prize in economics last fall, along with MIT’s Abhijit Banerjee — and Duflo thanked Angrist at their Nobel press conference, citing his methodological influence on her work. Duflo and Banerjee, as co-founders of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), are advocates of using field experiments in economics, which is still another way of producing empirical results with policy implications.

“More and more of our empirical work is worth paying attention to, and people do increasingly pay attention to it,” Angrist says. “At the same time, economists are much less inward-looking than they used to be.”


Empirical Strategies in Economics: Illuminating the Path from Cause to Effect

The view that empirical strategies in economics should be transparent and credible now goes almost without saying. The local average treatment effects (LATE) framework for causal inference helped make this so. The LATE theorem tells us for whom particular instrumental variables (IV) and regression discontinuity estimates are valid. This lecture uses several empirical examples, mostly involving charter and exam schools, to highlight the value of LATE. A surprising exclusion restriction, an assumption central to the LATE interpretation of IV estimates, is shown to explain why enrollment at Chicago exam schools reduces student achievement. I also make two broader points: IV exclusion restrictions formalize commitment to clear and consistent explanations of reduced-form causal effects; compelling applications demonstrate the power of simple empirical strategies to generate new causal knowledge.
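A hypothetical simulation (all numbers invented) may clarify what the LATE theorem says the simplest IV estimator, the Wald ratio, recovers: with a randomized binary instrument, no defiers, and heterogeneous treatment effects, the estimate is the compliers' average effect, not the population average.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# latent types: always-takers, never-takers, compliers (no defiers, per LATE)
types = rng.choice(["always", "never", "complier"], size=n, p=[0.2, 0.3, 0.5])
z = rng.integers(0, 2, size=n)                       # randomized binary instrument
d = (types == "always") | ((types == "complier") & (z == 1))

# heterogeneous effects: treatment raises y by 1.0 for compliers, 2.0 for always-takers
effect = np.where(types == "complier", 1.0, 2.0)
y = effect * d + rng.normal(size=n)

# Wald estimator: reduced form divided by first stage
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"Wald/IV estimate: {wald:.2f}")
```

The estimate lands near 1.0, the compliers' effect, even though always-takers gain 2.0: the instrument moves only the compliers, so only their effect is identified, which is the "for whom" in the lecture's description of LATE.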

This is a revised version of my recorded Nobel Memorial Lecture posted December 8, 2021. Many thanks to Jimmy Chin and Vendela Norman for their help preparing this lecture and to Noam Angrist, Hank Farber, Peter Ganong, Guido Imbens, and Parag Pathak for comments on an earlier draft. Thanks also go to my coauthors and Blueprint Labs colleagues, from whom I’ve learned so much over the years. Special thanks are due to my co-laureates, David Card and Guido Imbens, for their guidance and partnership. We three share a debt to our absent friend, Alan Krueger, with whom we collaborated so fruitfully. This lecture incorporates empirical findings from joint work with Atila Abdulkadiroğlu, Sue Dynarski, Bill Evans, Iván Fernández-Val, Tom Kane, Victor Lavy, Yusuke Narita, Parag Pathak, Chris Walters, and Román Zárate. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.

The work discussed here was funded in part by the Laura and John Arnold Foundation, the National Science Foundation, and the W.T. Grant Foundation. Joshua Angrist's daughter teaches in a Boston charter school.




Gravity models and empirical trade.

  • Scott Baier, College of Business, Clemson University
  • Samuel Standaert, Institute on Comparative Regional Integration Studies, United Nations University
  • https://doi.org/10.1093/acrefore/9780190625979.013.327
  • Published online: 31 March 2020

The gravity model of international trade states that the volume of trade between two countries is proportional to their economic mass and a measure of their relative trade frictions. Perhaps because of its intuitive appeal, the gravity model has been the workhorse model of international trade for more than 50 years. While the initial empirical work using the gravity model lacked sound theoretical underpinnings, subsequent theoretical developments have highlighted how a gravity-like specification can be derived from many models with varying assumptions about preferences, technology, and market structure. Alongside the strengthening of its theoretical roots, the way in which the gravity model is estimated has also evolved significantly since the start of the new millennium. Depending on the exact characteristics of the regression, different estimation methods should be used to estimate the gravity model.

  • international trade
  • bilateral trade
  • the gravity equation
  • structural gravity
  • trade costs
  • new trade theory
  • heterogeneous firms

The Workhorse of International Trade

For more than 50 years, the gravity model has been the workhorse model of empirical international trade. Originally the model was presented as a simple analogy between Newton’s Universal Law of Gravitation and the factors that would influence bilateral trade flows. The flow of trade between two countries was posited to be proportional to the economic size of the trading partners and inversely related to the distance between them. As formulated, the gravity equation of international trade could be rewritten as a log-linear empirical specification that could be easily estimated. A large number of studies showed that the empirical findings were consistent with the naïve gravity model. In particular, the coefficient estimates of the elasticity of bilateral trade with respect to importer and exporter GDP were close to unity, and the elasticity of trade with respect to bilateral distance was negative; moreover, the empirical specification was able to account for a reasonable amount of the observed variation in trade.

Even though the model was an empirical success, the gravity equation lacked a sound theoretical foundation. Beginning in the late 1970s, several authors showed that a gravity-like specification would emerge from a variety of standard assumptions regarding preferences, technology, market structure, and trade. At the same time, empirical trade economists became more concerned about the estimation strategy; in particular, that estimation using ordinary least squares might lead to biased coefficient estimates. The purpose of this review is to trace the history of the gravity equation of international trade and to provide context for its evolution. The review also highlights the current state of the field and areas of future research. 1

Since much of the work on the gravity equation has been designed to identify factors that may reduce or enhance bilateral trade, the paper starts by using a naïve gravity specification to show how geography, history, culture, and government policies appear to influence trade flows by looking at the cross-section of data for 145 countries in 2014 . It goes on to provide an overview of theoretical models and empirical specifications from 1970 through 2001 . The subsequent section works through four of the standard models of international trade and shows how each model leads to a similar empirical specification—the structural gravity model. This section also briefly covers how these models can be extended to include tariffs, intermediate goods, and multiple sectors; it concludes by reviewing recent theoretical models that lead to a gravity-like empirical specification.

The final section of the article reviews the state of the empirical specifications. It starts with a discussion of the conditions under which the log-linear gravity model estimated by ordinary least squares will yield consistent estimates of the coefficients of interest. In most cases, however, these conditions are not satisfied, and an alternative estimator is needed. Santos Silva and Tenreyro ( 2006 ) showed that the Poisson Pseudo Maximum Likelihood Estimator has desirable properties that make it attractive for the empirical gravity work. These estimators are contrasted with the Gamma Pseudo Maximum Likelihood Estimator, and Nonlinear Least Squares, and different specification tests are discussed that may assist in choosing among them. Another issue that can arise in estimating the gravity model is the endogeneity of the control variables, the typical solutions for which are also briefly discussed. The section concludes with a discussion of the path for future work.
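The Poisson pseudo-maximum-likelihood (PPML) approach mentioned here can be illustrated compactly. The following is a minimal pure-Python sketch on synthetic data (a real application would use a statistical package such as statsmodels); it fits E[trade | x] = exp(a + b·x) by iteratively reweighted least squares, the standard algorithm for Poisson regression. All variable names and parameter values are invented for illustration.

```python
# Sketch: Poisson pseudo-maximum-likelihood (PPML) for a gravity equation,
# in the spirit of Santos Silva and Tenreyro (2006). Synthetic data only.
import math
import random

random.seed(1)

# Synthetic bilateral data in levels: trade = exp(a + b * ln_dist) * noise.
a_true, b_true = 5.0, -1.0
ln_dist = [random.uniform(4.0, 9.0) for _ in range(500)]
trade = [math.exp(a_true + b_true * d) * random.uniform(0.5, 1.5)
         for d in ln_dist]

def ppml(y, x, iters=50):
    """Fit E[y|x] = exp(a + b x) by iteratively reweighted least squares."""
    a, b = math.log(sum(y) / len(y)), 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in x]
        # Working response of the IRLS step for a log link.
        z = [a + b * xi + (yi - m) / m for xi, yi, m in zip(x, y, mu)]
        # Weighted normal equations (2x2 system, weights = mu).
        s_w = sum(mu)
        s_wx = sum(m * xi for m, xi in zip(mu, x))
        s_wxx = sum(m * xi * xi for m, xi in zip(mu, x))
        s_wz = sum(m * zi for m, zi in zip(mu, z))
        s_wxz = sum(m * xi * zi for m, xi, zi in zip(mu, x, z))
        det = s_w * s_wxx - s_wx ** 2
        a = (s_wxx * s_wz - s_wx * s_wxz) / det
        b = (s_w * s_wxz - s_wx * s_wz) / det
    return a, b

a_hat, b_hat = ppml(trade, ln_dist)
print(round(b_hat, 2))  # distance elasticity, close to -1
```

Because PPML is estimated in levels rather than in logs, zero trade flows can be kept in the sample, one of the practical advantages stressed in this literature.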

Gravity: A First Look at the Data

The early empirical gravity models of international trade were rooted in a simple and intuitive analogy to Newton’s Law of Universal Gravity. According to Newton’s law, the force of attraction between two bodies is proportional to the product of their masses and inversely proportional to their distance squared. These early gravity models of trade postulated a similar relationship between the bilateral trade flows between two countries, their economic sizes, and a measure of trade frictions. The lack of theoretical underpinnings for this relationship is the reason why it is referred to as the naïve gravity model. Mathematically, it can be expressed as

X_ij = G Y_i^β1 Y_j^β2 dist_ij^β3 ε_ij (1)

where X_ij is bilateral trade between exporting country i and importing country j, G is a constant, Y_i (Y_j) is the gross domestic product of country i (j), and dist_ij is the bilateral distance between countries i and j. ε_ij is typically assumed to be a log-normally distributed error term. Given the multiplicative structure and the assumption on the error term, Equation 1 can be estimated by taking the natural logarithm, which leads to the log-linear specification

ln X_ij = ln G + β1 ln Y_i + β2 ln Y_j + β3 ln dist_ij + ln ε_ij

Literally hundreds of papers have estimated the gravity equation by ordinary least squares. Intuitively, one would expect the economic size of the countries to have a positive effect (β1 > 0, β2 > 0) and distance to have a negative effect (β3 < 0). While the coefficient estimates have varied from study to study depending on the period and the sample of countries, the estimated coefficients β1 and β2 were typically found to be close to unity, while that of β3 was typically negative and statistically significant. What cemented the gravity model’s popularity was its ability to explain much of the observed variation in bilateral trade flows.
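The log-linearization can be illustrated with a short sketch. The code below generates synthetic bilateral data from the multiplicative model and recovers the distance elasticity by OLS; to keep it dependency-free, the income elasticities are imposed at one (their typical estimated value), so only the distance coefficient needs to be estimated. All numbers are invented.

```python
# Sketch of the log-linear gravity regression on synthetic data.
import math
import random

random.seed(2)

n = 400
Y_i = [math.exp(random.uniform(2, 8)) for _ in range(n)]   # exporter GDP
Y_j = [math.exp(random.uniform(2, 8)) for _ in range(n)]   # importer GDP
dist = [math.exp(random.uniform(4, 9)) for _ in range(n)]  # bilateral distance
beta3 = -1.1
# Multiplicative gravity model with a log-normal error term.
X = [yi * yj * d ** beta3 * math.exp(random.gauss(0, 0.3))
     for yi, yj, d in zip(Y_i, Y_j, dist)]

# Taking logs gives a linear equation; with unit income elasticities
# imposed, regress ln X - ln Y_i - ln Y_j on ln(dist) to recover beta3.
y = [math.log(x) - math.log(yi) - math.log(yj)
     for x, yi, yj in zip(X, Y_i, Y_j)]
x = [math.log(d) for d in dist]
xbar, ybar = sum(x) / n, sum(y) / n
beta3_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
print(round(beta3_hat, 2))  # close to the true value of -1.1
```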

The core relationship of gravity models can be easily illustrated using the overall patterns in trade data. In a world without trade frictions, a simple gravity relationship is given by X_ij = Y_i Y_j / Y_W where, as before, Y_i (Y_j) is the GDP of country i (j) and Y_W is world income. The frictionless gravity equation can be rearranged to relate country j's expenditure share on goods produced in country i (X_ij / Y_j) to the latter’s share of world production (Y_i / Y_W). 2 Using trade data from 2014 for 125 countries, Figure 1 plots the relationship between expenditure shares and production shares on a logarithmic scale. The high correlation between these variables shown in this graph is consistent with, and part of the appeal of, the empirical structure of the gravity equation. 3 At the same time, Figure 1 also shows that more than 90% of all expenditure shares remain below the 45-degree line. That these import shares fall consistently below the production shares indicates that the world is far from frictionless.

Figure 1. Expenditure and production shares.
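The frictionless benchmark X_ij = Y_i Y_j / Y_W and its rearrangement into expenditure and production shares can be verified in a few lines (the GDP figures below are invented):

```python
# Minimal numeric illustration of the frictionless gravity equation:
# country j's expenditure share on country i's goods equals i's share
# of world production. GDP numbers are made up.
gdp = {"A": 120.0, "B": 60.0, "C": 20.0}
Y_W = sum(gdp.values())

for i in gdp:
    for j in gdp:
        X_ij = gdp[i] * gdp[j] / Y_W          # frictionless trade flow
        expenditure_share = X_ij / gdp[j]     # X_ij / Y_j
        production_share = gdp[i] / Y_W       # Y_i / Y_W
        assert abs(expenditure_share - production_share) < 1e-12
print("expenditure share of A's goods in B:", round(gdp["A"] / Y_W, 3))
```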

Modifying the frictionless gravity equation gives a crude measure of trade costs. To start, we rewrite the bilateral trade flows as

X_ij = (Y_i Y_j / Y_W) φ_ij^(−ε)

where φ_ij is the bilateral cost of trading and the strictly positive ε is the (absolute) elasticity of bilateral trade flows with respect to these trade costs. Much of the empirical gravity literature has been devoted to identifying and quantifying the factors that influence trade costs. They can be classified as costs induced by geography (natural trade costs), costs associated with historical and cultural linkages, and costs induced by policies (sometimes referred to as “unnatural trade costs”). Researchers interested in international trade and economic geography have emphasized the role of natural trade costs (often referred to as second nature geography) and how these natural trade costs are associated with the respective locations of the economic agents. An obvious empirical measure of such costs is the distance between countries. Limao and Venables ( 2001 ) and Hummels ( 2007 ) investigated the empirical relationship between observed CIF-FOB trade costs (that is, all the costs associated with shipping the goods and insuring them against damage during transport) and distance. These authors found a positive correlation between distance and trade costs. Indeed, one of the most robust findings in the empirical gravity literature is the negative relationship between distance and bilateral trade, or its equivalent: the positive relationship between the natural logarithm of distance and trade costs. A rough measure of trade costs is obtained by rewriting the naïve gravity equation as

ε ln φ_ij = ln(Y_i Y_j / Y_W) − ln X_ij

Figure 2 plots the relationship between this measure of trade costs and distance, depicting their clear positive relationship. The fitted line indicates that trade falls by 1.4% for every 1% increase in distance.

Figure 2. Trade costs and distance.
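The crude trade-cost measure described above can be illustrated numerically. The sketch below assumes one common convention, X_ij = (Y_i Y_j / Y_W) φ_ij^(−ε) with φ_ij ≥ 1 and ε > 0, and shows that inverting the naïve gravity equation recovers the friction exactly; all numbers are invented, including the assumed trade elasticity.

```python
# Back out a crude bilateral trade-cost measure from observables by
# inverting the naive gravity equation. Illustrative numbers only.
import math

Y_i, Y_j, Y_W = 500.0, 300.0, 10_000.0
eps = 4.0                     # assumed (hypothetical) trade elasticity
phi_true = 1.4                # true bilateral trade cost, phi >= 1
X_ij = (Y_i * Y_j / Y_W) * phi_true ** -eps   # frictional trade flow

# eps * ln(phi_ij) = ln(Y_i*Y_j/Y_W) - ln(X_ij)
ln_phi = (math.log(Y_i * Y_j / Y_W) - math.log(X_ij)) / eps
print(round(math.exp(ln_phi), 3))  # recovers phi_true = 1.4
```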

Other geographical factors are also posited to influence trade, including whether the countries share a border. It is frequently argued that contiguous countries have lower trade costs because their common border lowers both pecuniary and non-pecuniary costs of trade. Figure 3 depicts the relationship between trade costs and distance, where contiguous country-pairs are depicted with the plus symbol (+). On average, the points that correspond to the bilateral pairs that share a border lie below the least-squares line representing the relationship between trade costs and distance, indicating that contiguous countries face lower trade costs. 4

Figure 3. Trade costs and distance for contiguous (+) and non-contiguous (o) countries.

In addition to geography, cultural and historical factors are likely to influence trade costs. For example, Figure 4 depicts the relationship between trade costs and distance, this time with bilateral pairs that have ever had a colonial relationship indicated with a diamond symbol. Since these country pairs fall on average below the fitted line, the figure again suggests that trade costs are lower for bilateral pairs that share a colonial history. 5 A potential explanation may be that a colonial history implies more familiarity with each other or more similar institutions. Alternatively, differences in resources that increase trade between the two countries may have been a factor behind the colonial relationship in the first place.

Figure 4. Trade costs and distance for countries with (♦) and without (●) a colonial relationship.

Finally, some trade costs are likely attributable to government policies. For example, higher tariffs, economic sanctions, and other forms of regulations likely raise the cost of trading and hence reduce trade. Free trade agreements, on the other hand, are typically designed to lower trade costs and boost trade. Figure 5a depicts the relationship between trade costs and distance where countries that have a free trade agreement are depicted with a gold cross.

However, it is not immediately evident that conditional on distance, countries that have free trade agreements face lower trade costs. A potential explanation may be that other factors associated with trade costs also need to be included. Alternatively, the formation of the trade agreements could be in response to other factors such as high trade costs. Figure 5b depicts the separate fitted lines for the relationship between trade costs and distance with and without trade agreements. These fitted lines indicate that over shorter distances trade costs are lower, on average, for bilateral pairs that have a trade agreement. However, as the distance between the countries increases, free trade agreements appear to have a smaller impact on trade costs. 6

Figure 5. Distance and trade costs for countries with (x) and without (●) a free trade agreement. (a). combined fitted line. (b). separate fitted lines.

While these figures are only suggestive of the relationships between bilateral trade and geography, history, and government policies, the theories described in subsequent sections provide guidance on how other confounding factors can be controlled for when specifying an empirical gravity model.

Early Theoretical Developments and Empirical Applications

The gravity framework initially was appealing to researchers because the log-linear model was a simple and intuitive empirical way to assess the relationship between bilateral trade flows, production, income, and variables that could conceivably be viewed as factors that distort bilateral trade. When applied to trade data, the coefficient estimates were typically economically and statistically significant, and the simple gravity specification seemed to account for a large share of the variation of bilateral trade flows. Even though the gravity equation was considered an empirical success, it was often criticized for lacking sound theoretical foundations. Many of the early attempts to provide a theoretical foundation for the gravity model showed that bilateral trade was a function of incomes but did not provide an explicit rationale for the inclusion of distance and other trade costs. For example, Leamer and Stern ( 1970 ) presented a probabilistic model of bilateral trade flows. In their model, it was assumed that each transaction was of the same size (γ) and that the likelihood that an exporter in i would meet and trade with an importer in j would depend on the trade capacity of each of the two countries relative to total trade. If the trade capacity of the exporting (importing) country i (j) is given by F_i (F_j), then the probability of trade between an exporter and an importer is given by p_ij = (F_i / F_W)(F_j / F_W), where F_W represents total world trade. If there are N transactions of size γ, then total world trade is F_W = Nγ and the volume of trade between i and j is given by

X_ij = γ N p_ij = F_i F_j / F_W

Letting gross domestic product proxy for trade capacity results in the frictionless gravity equation. Leamer and Stern ( 1970 ) then asserted that it is plausible to assume that the likelihood of trade between two countries would depend on their proximity to each other, so that bilateral trade would be given by

X_ij = (F_i F_j / F_W) g(dist_ij)

where g′(dist_ij) < 0.

Anderson ( 1979 ) also derived a simple, frictionless gravity equation. In a world without trade costs and where preferences are homothetic and defined over a distinct basket of goods produced by each country, Anderson showed that the volume of trade between country i and country j is given by X_ij = θ_i Y_j, where θ_i represents the representative agent’s preference for the good produced in country i. Goods market clearing (i.e., total goods supplied equal total goods demanded) implies that

θ_i = Y_i / Y_W

Substituting for θ_i yields the frictionless gravity equation

X_ij = Y_i Y_j / Y_W

Even this simple formulation without trade costs provides a few simple, testable hypotheses regarding bilateral trade flows. Helpman ( 1987 ) and Baier and Bergstrand ( 2001 ) showed that this model predicts that bilateral trade increases as the difference in the economic size of the two countries decreases and when total economic size increases. To see this, define s_i = Y_i / (Y_i + Y_j) and s_ij = (Y_i + Y_j) / Y_W, so that the frictionless gravity equation can be expressed as X_ij / Y_W = s_i s_j s_ij^2

This equation is linear when log transformed and can be estimated as

ln(X_ij / Y_W) = β0 + β1 ln(s_i s_j) + β2 ln(s_ij) + ε_ij

One would expect the coefficient β1 to be close to unity and the coefficient β2 to be close to two. Estimating this model using bilateral data from 2014 for 145 countries, the parameters are as follows

where the standard errors are in parentheses, R² = 0.531, and N = 17,538. The coefficient estimates of both ln(s_i s_j) and ln(s_ij) are different from their hypothesized values at the 95% confidence level; however, the estimates are roughly consistent with the simple, frictionless gravity model.

In addition to the model with costless trade, Anderson ( 1979 ) presented several models where bilateral trade is influenced by trade costs. The most widely cited of these models is the Armington model. In this model, the representative agent’s preferences are defined over goods, where each good is uniquely produced by one country. These preferences are characterized by a constant elasticity of substitution (CES) utility function given by

U_j = ( Σ_i β_i^(1/σ) c_ij^((σ−1)/σ) )^(σ/(σ−1))

The representative agent maximizes her utility subject to the budget constraint w_j = Σ_i p_ij c_ij, where w_j is the wage rate of the representative agent in country j, and c_ij and p_ij are, respectively, the consumption and the landed price in j of the good produced in i (i.e., the price paid by consumers in j). Anderson assumed that trade was subject to iceberg trading costs, such that if one unit of the good is shipped from country i to country j, only 1/t_ij units arrive in country j (t_ij > 1 for i ≠ j). As markets are assumed to be competitive, the landed price is simply equal to the factory gate price, p_i, scaled up by the iceberg trading cost, so that p_ij = p_i t_ij.
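A small numeric sketch (with invented parameter values) shows how this CES structure allocates country j's income across origins: expenditures are proportional to β_i (p_i t_ij)^(1−σ) and exhaust the budget.

```python
# Numeric sketch of CES expenditure allocation in an Armington-style
# setting. All parameter values are invented for illustration.
sigma = 5.0
Y_j = 100.0                      # income of importing country j
beta = [0.5, 0.3, 0.2]           # preference weights for each origin i
p = [1.0, 1.2, 0.9]              # factory gate prices p_i
t_j = [1.0, 1.3, 1.5]            # iceberg costs t_ij of shipping to j

weights = [b * (pi * tij) ** (1 - sigma)
           for b, pi, tij in zip(beta, p, t_j)]
X_j = [Y_j * w / sum(weights) for w in weights]   # expenditures X_ij

# CES demands exhaust the budget: the X_ij sum to Y_j.
assert abs(sum(X_j) - Y_j) < 1e-9
print([round(x, 1) for x in X_j])
```

Note how higher trade costs t_ij sharply reduce the expenditure share when σ is large, the same force the gravity equation captures.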

The Armington assumptions imply that country j’s total expenditures on goods produced in i are given by

X_ij = β_i ( p_i t_ij / P_j )^(1−σ) Y_j

where Y_j is aggregate income (Y_j = w_j L_j) and P_j = ( Σ_i β_i (p_i t_ij)^(1−σ) )^(1/(1−σ)). By summing over all destinations, market clearing implies

Y_i = Σ_j X_ij = β_i p_i^(1−σ) Σ_j (t_ij / P_j)^(1−σ) Y_j

If the quantity of each country’s good is defined so that its price is equal to unity, the following expression for bilateral trade can be obtained by substituting for β_i

X_ij = Y_i Y_j (t_ij / P_j)^(1−σ) / [ Σ_k (t_ik / P_k)^(1−σ) Y_k ] (4)

In order to estimate Equation 4, many authors assumed that the term in brackets exhibited little variation across bilateral trading partners, so that it could safely be ignored. Additionally, it was assumed that trade costs could be modeled as t_ij^(1−σ) = exp(z_ij^N β_N + z_ij^H β_H + z_ij^P β_P + e_ij), where z_ij^N is a vector of variables capturing natural trading costs, z_ij^H is a vector of variables associated with historical and cultural factors, z_ij^P is a vector of trade costs associated with policy, and e_ij is a normally distributed error term. Taking the natural logarithm of Equation 4 then yields

ln X_ij = β0 + ln Y_i + ln Y_j + z_ij^N β_N + z_ij^H β_H + z_ij^P β_P + e_ij

Rather than ignoring the term in brackets, some authors approximated the term by computing a “remoteness” index (see, e.g., Wei [ 1996 ], Helliwell [ 1997 ], and Harrigan [ 2003 ]). While the coefficient estimates on the remoteness variables had the expected signs and were statistically significant, the variables were simple reduced-form representations that typically were a GDP-weighted measure of distance that did not incorporate all aspects of the trade cost vector.

Bergstrand ( 1985 ) built on the Anderson framework by including a nested CES demand structure that allowed for the elasticity of substitution among imported goods to differ from the elasticity of substitution between domestically produced goods and imported goods. Unlike Anderson ( 1979 ), Bergstrand ( 1985 ) assumed that there are costs associated with distributing the products to each potential market and this cost could be modeled with a constant elasticity of transformation (CET) function. Assuming that incomes and prices were exogenous, the CET-technology yielded a set of export supply equations that can be linked to the CES-system of demand equations yielding the following gravity equation

where ν is the elasticity of substitution between domestically produced goods and importables. One of the main contributions of the Bergstrand framework was to show that, in addition to incomes and trade costs, bilateral trade flows depend on importer and exporter price indices. Furthermore, with data on gross tariffs, this framework allows the researcher to identify the elasticity of substitution among importables (σ) and the elasticity of transformation parameter (γ). While the Bergstrand model featured price indices that were related to trade and trade costs, it was not clear at the time how to account for these indices.

One weakness of the Armington models of Anderson ( 1979 ) and Bergstrand ( 1985 ) was that product differentiation was determined arbitrarily depending on the country of origin. The new trade models of Krugman ( 1979 ), Krugman ( 1980 ) and Helpman and Krugman ( 1989 ), which were developed to account for intra-industry trade, provided a richer supply-side model of production. The key characteristics of these models are that the market structure is monopolistically competitive, consumers have a love of variety, firms within a country all have the same production technology, and the production technology for each firm exhibits increasing returns to scale.

Specifically, consumer preferences are defined by a utility function with a constant elasticity of substitution over the varieties of a good

U_j = ( Σ_ω c_j(ω)^((σ−1)/σ) )^(σ/(σ−1))

where ω represents a distinct variety. These preferences are typically referred to as Dixit-Stiglitz preferences. Given that firms are homogeneous within a country and that preferences are symmetric over all varieties, the utility function of the representative agent can be expressed as

U_j = ( Σ_i N_i c_ij^((σ−1)/σ) )^(σ/(σ−1))

where N_i is the number of varieties produced in country i. The representative agent maximizes her utility subject to a budget constraint, which is given by w_j = Σ_i N_i p_ij c_ij. The demand of consumers in country j for each variety produced in country i is given by

c_ij = ( p_ij / P_j )^(−σ) ( w_j / P_j )

where P_j = { Σ_i N_i p_ij^(1−σ) }^(1/(1−σ)).

In this model, firms face a fixed cost of production and constant marginal cost. With labor as the only input, production of a representative firm in country i is given by q_i = A_i (l_i − f_i), where q_i is the output produced by the firm in country i, A_i is the technology available to firms in country i, l_i is the labor employed by the representative firm in country i, and f_i is the fixed production cost for the representative firm in country i. Given the production technology, the wage bill for the firm is given by w_i l_i = w_i (q_i / A_i + f_i). As in the Armington model, trade involves iceberg trade costs, so that total production of a variety produced in country i and shipped to all other markets is constrained by

q_i = Σ_j t_ij c_ij L_j

Profit maximization implies that each firm sets its price as a constant markup over marginal cost. The first-order conditions can be arranged to show that prices, inclusive of iceberg trade costs, are markups over marginal costs

p_ij = (σ / (σ − 1)) (w_i / A_i) t_ij

Free entry implies zero excess profits, so that total output of each firm in country i is given by q_i = A_i f_i (σ − 1). Labor market clearing implies that L_i = N_i (q_i / A_i + f_i) = N_i σ f_i, or N_i = L_i / (σ f_i), so that aggregate production in market i is given by Y_i = p_i N_i q_i = (σ / (σ − 1)) (w_i / A_i) (L_i / (σ f_i)) A_i f_i (σ − 1) = w_i L_i
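The zero-profit and market-clearing conditions can be verified numerically. The sketch below uses made-up parameter values and checks that firm output, the number of firms, labor market clearing, and the wage-bill identity are mutually consistent.

```python
# Numeric check of the free-entry conditions in the monopolistic
# competition model. Parameter values are invented.
sigma, A, f, L, w = 4.0, 2.0, 1.5, 600.0, 1.0

q = A * f * (sigma - 1)            # output of each firm under free entry
N = L / (sigma * f)                # number of firms (varieties)
p = (sigma / (sigma - 1)) * w / A  # markup over marginal cost

labor_demand = N * (q / A + f)     # each firm uses q/A + f units of labor
Y = N * p * q                      # aggregate value of production

assert abs(labor_demand - L) < 1e-9   # labor market clears
assert abs(Y - w * L) < 1e-9          # GDP equals the wage bill
print(N, q)  # → 100.0 9.0
```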

A conditional equilibrium gravity equation is obtained by substituting p_ij = p_i t_ij in the demand equation and substituting N_i = Y_i / (p_i q_i), so that the demand equation can be expressed as

Assuming that technology is the same across countries this equation can be estimated in log form as

where the trade costs were modeled as

This conditional equilibrium gravity equation includes the price of goods produced in i and the price index for the commodities consumed in country j. Baier and Bergstrand ( 2001 ) used GDP deflators as empirical proxies for the price terms. However, as Feenstra ( 2004 ) pointed out, the GDP deflators do not reflect the international prices implied by the theory, so it should come as no surprise that the price terms were often statistically insignificant.

In order to obtain unbiased coefficient estimates, the empirical specification needs to account for the price terms (p_i and P_j) or include a more theoretically consistent measure of remoteness. Since these variables are functions of the trade costs, they are likely correlated with the other right-hand-side variables, and failing to account for them correctly will lead to biased coefficient estimates. Anderson and van Wincoop ( 2003 ) were the first to provide guidance on estimating the gravity equation while accounting for the general equilibrium price terms in a theoretically consistent way. They showed that the price terms in the gravity equation are implicit functions of the trade costs, incomes, and expenditures, confirming what Linnemann ( 1966 ) had suggested many years earlier when he stated that prices were an equilibrium outcome of the trade costs and (assumed to be) exogenous incomes. However, unlike Linnemann, who had suggested that the price terms could be ignored, Anderson and van Wincoop stressed the importance of controlling for these price terms in order to obtain consistent estimates. In their application, Anderson and van Wincoop addressed what had been termed the “border puzzle”: McCallum ( 1995 ) had found that, controlling for distance and economic size, trade between Canadian provinces was 22 times higher than trade between Canadian provinces and U.S. states. Once the theoretically consistent price terms are accounted for, and using the comparative statics outlined in Anderson and van Wincoop, the estimated impact of the border is dramatically reduced. Since the publication of the Anderson and van Wincoop paper, nearly all empirical specifications have attempted to account for the price terms.

Structural Gravity

The remainder of this section explains how, once the market structure and market clearing are taken into account, the Armington model, the monopolistically competitive trade model, the Eaton and Kortum ( 2002 ) Ricardian model, and Melitz’s ( 2003 ) heterogeneous firms model can all be written in a similar form. The so-called structural gravity equation takes the following shape

X_ij = G Y_i E_j ( t_ij / (Π_i P_j) )^(−ε)

where, as before, G is a constant term, t_ij are the trade costs between countries i and j, Y_i is production in country i, E_j is aggregate expenditure by country j, and ε is the trade elasticity. As will be explained in more detail, the exporter and importer price indexes (Π_i and P_j) aggregate the trade costs over all trading partners

Structural Gravity in the Armington Model

The Armington model assumes that preferences can be represented by a CES utility function where each country’s good enters the utility function symmetrically. 7 More formally, the utility function of the representative agent in country j is given by

U_j = ( Σ_i c_ij^((σ−1)/σ) )^(σ/(σ−1)) (7)

The agents maximize their utility subject to a budget constraint, which states that expenditures equal income plus the trade deficit: e_j = w_j + d_j = Σ_i p_ij c_ij. The underlying assumption is that the trade deficit is attributable to macroeconomic factors that are not influenced by current trade or trade policies. Maximizing Equation 7 subject to the budget constraint and aggregating over consumers in country j yields the following expression for consumption in country j of country i’s good

C_ij = ( p_ij / P_j )^(−σ) ( E_j / P_j )

where the price index is given by P_j = [ Σ_i p_ij^(1−σ) ]^(1/(1−σ)), C_ij = c_ij L_j is aggregate consumption of goods produced in country i and consumed in j, and total expenditures in country j are given by E_j = e_j L_j.

Assuming that markets are perfectly competitive, the landed price of a good will be equal to the factory gate price scaled up by the iceberg trade costs; that is, p_ij = p_i t_ij. The value of bilateral exports from i to j is given by

X_ij = ( p_i t_ij / P_j )^(1−σ) E_j

and the price index can be expressed as P_j = [ Σ_i (p_i t_ij)^(1−σ) ]^(1/(1−σ)).

Market clearing implies Y_i = Σ_j X_ij, so that

p_i^(1−σ) = Y_i / Π_i^(1−σ) (8)

where Π_i = [ Σ_j ( t_ij / P_j )^(1−σ) E_j ]^(1/(1−σ)).

Substituting p_i^(1−σ) from Equation 8 into the price index and the trade flow equation yields the following set of equations

X_ij = Y_i E_j ( t_ij / (Π_i P_j) )^(1−σ)
Π_i^(1−σ) = Σ_j ( t_ij / P_j )^(1−σ) E_j
P_j^(1−σ) = Σ_i ( t_ij / Π_i )^(1−σ) Y_i

These price terms are referred to in Anderson and van Wincoop as the multilateral resistance terms. The outward multilateral resistance term, Π_i, is a weighted aggregate of all trade costs faced by exporters in country i, while the inward multilateral resistance term, P_j, is a weighted aggregate of all trade costs faced by importers in country j. What matters for the volume of trade is therefore the vector of bilateral trade costs relative to the inward and outward multilateral resistance terms. Finally, from an empirical standpoint, the trade elasticity in the Armington model is constant and is determined by the elasticity of substitution across goods (σ).
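In practice, the multilateral resistance terms are often solved by fixed-point iteration given trade costs, outputs, and expenditures. The following self-contained sketch uses an invented three-country example; it works with a share-normalized version of the system (dividing by world output) and pins down the scale indeterminacy by normalizing the first inward resistance term to one. These normalizations are assumptions of the sketch, not part of the text above.

```python
# Sketch: solving for multilateral resistance terms by fixed-point
# iteration on a made-up 3-country example with symmetric iceberg
# costs and balanced trade (E_i = Y_i).
sigma = 5.0
Y = [100.0, 60.0, 40.0]          # output by country
E = Y[:]                          # expenditures (balanced trade)
Y_W = sum(Y)
t = [[1.0, 1.4, 1.6],            # symmetric bilateral trade costs t_ij
     [1.4, 1.0, 1.3],
     [1.6, 1.3, 1.0]]
n = len(Y)

Pi = [1.0] * n                    # outward multilateral resistance
P = [1.0] * n                     # inward multilateral resistance
for _ in range(500):
    Pi = [sum((t[i][j] / P[j]) ** (1 - sigma) * E[j] / Y_W
              for j in range(n)) ** (1 / (1 - sigma)) for i in range(n)]
    P = [sum((t[i][j] / Pi[i]) ** (1 - sigma) * Y[i] / Y_W
             for i in range(n)) ** (1 / (1 - sigma)) for j in range(n)]
    scale = P[0]                  # pin down the scale indeterminacy
    P = [p / scale for p in P]
    Pi = [pi * scale for pi in Pi]

# Structural gravity flows implied by the solved resistance terms.
X = [[Y[i] * E[j] / Y_W * (t[i][j] / (Pi[i] * P[j])) ** (1 - sigma)
      for j in range(n)] for i in range(n)]

# At the fixed point, goods markets clear: sum_j X_ij = Y_i.
for i in range(n):
    assert abs(sum(X[i]) - Y[i]) < 1e-6
print([round(v, 3) for v in P])
```

This iteration is equivalent to the iterative proportional fitting scheme in the transformed variables Π^(1−σ) and P^(1−σ), which is why it converges reliably for positive trade-cost matrices.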

In order to close the model, labor is assumed to be the only input in the production of good i, and the production function for the good produced in country i (i = 1, …, N) exhibits constant returns to scale. If technology in country i is given by A_i, then the factory gate price is given by p_i = w_i / A_i. Market clearing implies w_i L_i = Σ_j X_ij = p_i^(1−σ) Π_i^(1−σ). After substituting for p_i, the market clearing condition can be rearranged to solve for the wage rate; that is,

w_i = A_i^((σ−1)/σ) Π_i^((1−σ)/σ) L_i^(−1/σ)

As one would expect, an increase in technology in country i (A_i) increases the wage rate in country i. In addition, having better access to markets reduces the outward multilateral resistance term (Π_i) and pushes up the wage rate, similar to a technological improvement. Finally, in the Armington model, increasing the population (the number of workers) lowers the wage because it increases the supply of country i’s good and lowers its price, which is reflected back on factor prices.

Structural Gravity and Monopolistic Competition with Homogenous Firms

As in the previous section, agents have symmetric preferences over varieties of goods. Since each firm has identical technology within a country, the value of trade from country i to country j is given by

X_ij = N_i ( p_i t_ij / P_j )^(1−σ) E_j

where P_j = [ Σ_i N_i (p_i t_ij)^(1−σ) ]^(1/(1−σ)) is the CES price index for the consumer in country j and N_i is the number of varieties (goods) produced in country i. As in the Armington model, the landed price in country j is equal to the factory gate price scaled by the iceberg trading costs (p_ij = p_i t_ij), and market clearing implies Y_i = Σ_j X_ij, or

N_i p_i^(1−σ) = Y_i / Π_i^(1−σ)

Substituting for N_i p_i^(1−σ) in the CES price index and into the bilateral trade flow equation yields the following system of equations

X_ij = Y_i E_j ( t_ij / (Π_i P_j) )^(1−σ)
Π_i^(1−σ) = Σ_j ( t_ij / P_j )^(1−σ) E_j
P_j^(1−σ) = Σ_i ( t_ij / Π_i )^(1−σ) Y_i

As in the Armington model, the trade elasticity is determined by the elasticity of substitution across varieties of goods (σ) and is constant.

Factor prices are pinned down by substituting these into the market clearing conditions for goods and labor: Y_i = w_i L_i and N_i = L_i / (σ f_i). Rearranging gives us

w_i = B_MC A_i^((σ−1)/σ) Π_i^((1−σ)/σ)

where B_MC is a constant that depends only on the fixed cost f_i and the elasticity of substitution σ. Equations (12)–(15) yield a system of equations for trade flows, the multilateral resistance terms, and factor prices. As in the Armington model, factor prices are influenced by technology and the outward multilateral resistance term. However, unlike in the Armington model, the wage rate does not depend on the supply of labor. Instead, an increase in the amount of labor in country i leads to an increase in the number of varieties produced in country i. Since consumers have preferences over varieties, the increase in demand perfectly offsets the increased production and the wage rate does not change.

The supply sides of both the Armington and the monopolistically competitive trade models are rather simplistic. The Armington model assumes that each country produces a single good and that the producers in the country do not face direct competition. Within a country, production exhibits constant returns to scale, so that pricing and market demand follow directly. In the monopolistically competitive trade model, households’ love of variety and increasing returns in production are essential for pinning down the number of varieties and the size of the firm. However, in most cases, it is assumed that each firm has access to the same technology, so that production, sales, and exports are the same for all firms. The models developed by Eaton and Kortum ( 2002 ) and Melitz ( 2003 ) provide richer supply-side models of international trade. The Eaton-Kortum model is a Ricardian model with perfect competition where agents have preferences over varieties, and consumers buy from the low-cost producer. The Melitz model builds on the models developed by Krugman ( 1979 , 1980 ) and Helpman and Krugman ( 1989 ) by modeling heterogeneous firms that differ in terms of their productivity.

Structural Gravity in the Multi-Country Ricardian Model

Eaton and Kortum ( 2002 ) extended the classic Dornbusch, Fischer, and Samuelson ( 1977 ) Ricardian model with a continuum of goods to a multi-country setting. In this setting, goods within a category are homogeneous, and Ricardian differences in technology imply that trade is driven by comparative advantage. It will be shown that the structural gravity model also emerges in this framework. Unlike in the previous models, the trade elasticity is determined by the shape parameter of the Fréchet distribution, which governs the dispersion of productivity across firms in different countries.

As in the monopolistically competitive model, aggregate demand by country j for variety ω is given by

Eaton and Kortum assume that the technical efficiency of a firm in country i is determined by a random draw from a Fréchet distribution. The CDF of this distribution is given by F_i(z) = exp( −T_i z^{−θ} ) , where T_i is a country-specific parameter reflecting the productivity distribution in i ( T_i > 0 ), and θ , common to all countries, governs the dispersion of technology. Eaton and Kortum assume that the productivity draws are independent. Furthermore, if labor is the only input into the production process and production exhibits constant returns to scale, the factory-gate price for variety ω produced in country i is given by p_i(ω) = W_i / z_i(ω) , where z_i(ω) is the technology of the firm producing the good. 8

Given that the productivity draws are independent, it can be shown that the probability that a producer in country i can deliver the product to country j at a price lower than or equal to p is given by

where Φ_j = ∑_{k=1}^N T_k ( W_k t_{kj} )^{−θ} . The probability that country i 's good sells for the lowest price in market j is given by

Given this probability, bilateral trade from i to j can be expressed as
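The elided expression is the standard Eaton–Kortum result; a sketch using the Φ_j defined above, with E_j denoting country j's total expenditure:

```latex
\pi_{ij} = \frac{T_i \left(W_i t_{ij}\right)^{-\theta}}{\Phi_j}, \qquad
X_{ij} = \pi_{ij}\, E_j = \frac{T_i \left(W_i t_{ij}\right)^{-\theta}}{\Phi_j}\, E_j,
```

so that the elasticity of bilateral trade with respect to trade costs t_ij is −θ rather than 1 − σ.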

Market clearing implies that

Eaton and Kortum show that the price index in country j is given by P_j = γ Φ_j^{−1/θ} . Substituting the price index into the market clearing condition and into the trade flow equation yields

Goods market clearing implies that the wage rate is given by

where z̄_i = e^{0.577/θ} T_i^{1/θ} is the geometric mean of z_i .

Structural Gravity with Heterogeneous Firms

In the standard monopolistically competitive trade model, all firms are assumed to be identical, and all firms export. However, firm-level data show that firms differ along a host of characteristics, including size and productivity, and that the latter is highly correlated with trade participation. By allowing for differences in firm-level productivity and a fixed cost of exporting, Melitz ( 2003 ) developed a model that can account for several of these features of the data. Chaney ( 2008 ) and Redding ( 2011 ) show that when the distribution of firm-level productivity is characterized by a Pareto distribution, the response of bilateral trade flows to changes in trade costs results in a clean decomposition of the extensive and intensive margins of trade. This section shows how the Melitz model works when firm productivity follows a Pareto distribution, which yields a structural gravity equation similar to those derived in the previous sections. As in the Eaton and Kortum model, the trade elasticity with respect to marginal trade costs is governed by the shape parameter of the Pareto distribution.

As in the previous section, consumers’ preferences are characterized by the Dixit-Stiglitz CES utility function defined over varieties. Aggregate demand in country j for variety ω is given by

For a producer of the variety ω in country i , profits from selling in market j are given by

where A_i is the aggregate technology in country i , φ is the firm-specific productivity, and f_{ij}^X is the fixed cost of exporting from market i into market j . Profit maximization implies that the price of the varieties shipped from country i to country j will depend on the firm’s productivity draw ( φ ); that is, the price of goods shipped from country i to country j is given by

where ρ = (σ−1)/σ . The profits earned by a firm in country i with productivity φ that sells in country j are given by

Melitz defined the cut-off productivity φ i j * such that the firm’s profits are exactly zero: π ( φ i j * ) = 0 .
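A hedged sketch of this cutoff condition, following standard treatments (e.g., Melitz & Redding, 2014): with p_ij(φ) = W_i t_ij / (ρ A_i φ) and CES demand, per-market profits are

```latex
\pi_{ij}(\varphi) = \frac{1}{\sigma}\left(\frac{W_i t_{ij}}{\rho A_i \varphi P_j}\right)^{1-\sigma} E_j - W_i f^{X}_{ij},
```

so that φ_ij* solves (1/σ)( W_i t_ij / (ρ A_i φ_ij* P_j) )^{1−σ} E_j = W_i f_ij^X , and only firms with φ ≥ φ_ij* find it profitable to serve market j.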

As in Chaney ( 2008 ), Redding ( 2011 ), and Melitz and Redding ( 2014 ), productivity is assumed to follow a Pareto distribution whose cumulative distribution function is given by G(φ) = 1 − ( φ̄ / φ )^κ , defined on the support [ φ̄ , ∞ ). Given the productivity distribution and the definition of the zero-cut-off productivity, the expected profits for firms in country i selling in country j are given by

and expected profits in market j by all active firms in country i are given by

Aggregating across all markets delivers an expression for expected profits

Free entry implies that the expected profits, conditional on a productivity draw greater than or equal to φ_{ii}* , are equal to the fixed costs of entry ( f_i^E ) in terms of domestic labor units; that is,

As in the Krugman model, the labor market clearing condition pins down the mass of firms in each country

Given the mass of firms, exports from country i to country j are given by

where G(φ_{ik}*) = 1 − ( φ̄ / φ_{ik}* )^κ for all k . Substituting the zero-cut-off-profit condition σ W_i f_{ij}^X ( φ_{ij}* )^{1−σ} = ( W_i t_{ij} / (ρ A_i P_j) )^{1−σ} E_j yields

The first term in brackets is the mass of firms in country i that actively export to country j , which following Redding ( 2011 ) can be viewed as the extensive margin. A change in variable trade costs or fixed trade costs will impact the extensive margin through its impact on the cut-off productivity, φ i j * . The second term in brackets, in turn, shows that the intensive margin is a function of the fixed cost of exporting, f i j X .

In order to express the bilateral trade equation in the form of structural gravity, the cut-off productivity of Equation 21 can be substituted into the bilateral trade equation to obtain

where B_m is a bundling constant that depends only on the Pareto parameters ( φ̄ , κ ) and the elasticity of substitution σ . Market clearing implies

Defining Π i as

bilateral trade can be expressed as

In the models reviewed in the previous sections, all trade costs were modeled as iceberg trade costs. However, some trade costs create rents, and how these rents are (re-)distributed can impact trade flows. For example, if there are ad valorem tariffs (i.e., tariffs on the value and not the quantity of the good) and those tariffs are distributed as lump-sum payments to households, a structural gravity equation emerges with tariffs included as part of the trade cost vector. Moreover, the inclusion of tariffs may allow the researcher to identify key parameters of the model. In both the Armington model and the model with monopolistic competition, the structural gravity equation is given by

where τ_{ij} is the gross tariff rate. When data on ad valorem tariff rates are available, they can be used to identify the elasticity of substitution across varieties.

The structural gravity model also emerges when the production function is modified to include intermediate goods. Eaton and Kortum ( 2002 ) and Redding and Venables ( 2004 ) provide theoretical models that include intermediate goods used in the production process, as described in Fujita, Krugman, and Venables ( 1999 ). They show that when production technology is represented by a Cobb-Douglas production function using labor and a CES aggregate of intermediate goods, a structural gravity equation emerges. The main difference between these models and, for example, the Armington model is that factor prices will be influenced by the inward multilateral resistance term: intuitively, better access to foreign intermediates tends to raise the returns to the domestic factors of production. In addition, a sectoral gravity equation emerges when there are many sectors and the demand for the varieties produced in different sectors is weakly separable in the production function and/or in the utility function. Anderson and Yotov ( 2016 ) estimated a sectoral structural gravity equation and highlighted how trade costs vary across sectors. Redding and Weinstein ( 2019 ) used a nested CES demand system to show how a log-linear gravity equation can be estimated and aggregated, and how the overall effects of different trade costs can be decomposed into components reflecting the sectoral gravity equation estimates.

Hallak ( 2006 ), Hallak ( 2010 ), and Baldwin and Harrigan ( 2011 ) allowed varieties to differ by quality. In these models, the demand for high-quality products increases with the consumer’s income. On the supply side, high-income countries also tend to produce high-quality goods, either because serving home markets that demand quality leads their firms to specialize in high-quality goods, or because their firms are, on average, more productive and can therefore produce higher-quality goods more efficiently. As a result, countries with similar per capita GDPs are expected to trade more (see Linder, 1961 ). Hallak ( 2010 ) used a sectoral gravity-like equation to show that bilateral pairs with similar per capita GDPs trade more.

In all of the models discussed so far, preferences are characterized by a CES utility function. Novy ( 2013 ) derived a gravity equation where demand is characterized by a translog demand system and showed that, unlike in the earlier models, the bilateral elasticity of trade with respect to trade costs is not constant. Behrens and Murata ( 2012 ) and Arkolakis, Costinot, Donaldson, and Rodríguez-Clare ( 2015 ) showed that the structural gravity model can also be obtained when agents have constant absolute risk aversion utility functions. In this case, there is a choke price that prevents some firms from exporting; nevertheless, because firm technology in this framework follows an unbounded Pareto distribution, bilateral trade is always positive, and a gravity-like equation can still be obtained. 9

Empirical Gravity

From Tinbergen’s early application until the beginning of this century, trade economists working on the gravity model have focused mainly on either the theoretical foundations of the model or on expanding the list of covariates used to identify other natural, historical, cultural, and policy-related variables that affect bilateral trade. During this time, nearly every empirical application estimated the gravity model in log-linear form using ordinary least squares (OLS). 10 The influential work by Santos Silva and Tenreyro ( 2006 ) called into question the use of the log-linear specification. Santos Silva and Tenreyro (hereafter SST) argued that the error term was likely heteroskedastic, and the variance was likely a function of the right-hand side control variables. If this were the case, the coefficient estimates from the log-linear model would be inconsistent. SST proposed using a Poisson Pseudo Maximum Likelihood Estimator. This section discusses the properties of the different estimators, starting with the assumptions required for OLS to yield consistent estimates. It subsequently covers the estimation of the gravity model using the Poisson Pseudo Maximum Likelihood Estimator (PPML), the Gamma Pseudo Maximum Likelihood Estimator (GPML), and Nonlinear Least Squares (NLS). 11 This section closes by addressing the potential endogeneity of the policy variables and discussing how it has been addressed.

The Log-Linear Model

Assuming that the expected value of trade is given by the structural gravity equation as derived in the previous section, observed bilateral trade is given by

In addition to the error term u i j t , time subscripts have been added to the gravity model to allow for the estimation to include several years of bilateral trade data. In some instances, it will be more convenient to substitute for the trade cost vector in the functional form and express Equation 22 as

Most early applications of the gravity model estimated Equation (22) without accounting for the multilateral resistance terms. To illustrate, the first column of Table 1 presents the coefficients of the log-linear model estimated using pooled OLS for five-year intervals from 1974 to 2014. 12 Included in these regressions are time dummies that capture the yearly variations in worldwide trade. The results are consistent with many papers estimating the log-linear gravity model. Specifically, the coefficients on GDP are (relatively) close to unity, the absolute value of the distance elasticity is close to unity, and the coefficients on the common language, contiguity, colony, and FTA indicator variables all indicate that they have a positive effect on trade.

Addressing Multilateral Resistance

There are a number of reasons why the coefficient estimates from this specification may be inconsistent. The most obvious, given the theoretical discussion in the previous section, is that this specification does not include controls for the multilateral resistance terms. 13 Anderson and van Wincoop used an iterative nonlinear least squares estimator that computes and incorporates the multilateral resistance terms. However, a more straightforward way to account for the multilateral resistance terms that avoids custom programming is to include importer and exporter fixed effects. The inclusion of these fixed effects means that the trade elasticity with respect to GDPs can no longer be identified directly. When extended to a panel setting, Baldwin and Taglioni ( 2006 ) and Baier and Bergstrand ( 2007 ) emphasized that the theoretically consistent fixed effects should be specified as exporter-year and importer-year fixed effects.

The structural gravity equation can now be rewritten as

where X_{ijt}^{SG} = exp( Z_{ijt} β + δ_X D_{it} + δ_M D_{jt} ) , Z_{ijt} is a k -dimensional vector capturing bilateral trade costs, and D_{it} ( D_{jt} ) are exporter-year (importer-year) dummy indicators. One issue that arises in estimating Equation (23) is that as the number of countries and years in the panel rises, the estimation of the coefficients on the dummy indicators becomes increasingly difficult and time-consuming. This challenge led to Baier and Bergstrand’s ( 2009 ) linearized version of the multilateral resistance terms, which greatly reduced the number of parameters and allowed for the inclusion of importer- and exporter-specific effects. However, these technical issues are less of a concern now that most statistical packages have custom programs that allow the researcher to estimate models with high-dimensional fixed effects.
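To make the fixed-effects approach concrete, here is a minimal numpy sketch on synthetic cross-sectional data (the country count, the parameters, and the −1.0 distance elasticity are invented for the example, not taken from Table 1); it absorbs exporter and importer effects with explicit dummy variables and recovers the distance elasticity by OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30                                       # countries (synthetic)
ldist = rng.uniform(0.0, 3.0, (N, N))        # log bilateral distance
a = rng.normal(0.0, 1.0, N)                  # exporter effects (absorb Y_i, Pi_i)
b = rng.normal(0.0, 1.0, N)                  # importer effects (absorb E_j, P_j)
beta_dist = -1.0                             # true distance elasticity

lnX = a[:, None] + b[None, :] + beta_dist * ldist + rng.normal(0.0, 0.1, (N, N))

# Design: N exporter dummies (these absorb the constant), N-1 importer dummies,
# and log distance; estimate by OLS on the stacked N*N observations.
i_idx, j_idx = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D_exp = np.eye(N)[i_idx.ravel()]
D_imp = np.eye(N)[j_idx.ravel()][:, 1:]      # drop one importer dummy
Z = np.column_stack([D_exp, D_imp, ldist.ravel()])

coef, *_ = np.linalg.lstsq(Z, lnX.ravel(), rcond=None)
dist_elasticity = float(coef[-1])
print(round(dist_elasticity, 2))             # approximately -1.0
```

In real applications with many countries and years, high-dimensional fixed-effects routines absorb these dummies far more efficiently than the explicit design matrix built here.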

Table 1. Comparison of Different Estimators

Standard errors in parentheses. *** p<0.01, ** p<0.05.

Heteroskedasticity and the Structural Gravity Model

For many years, nearly all empirical papers estimated a log-linear gravity model using ordinary least squares, which in many instances may have led to biased coefficient estimates. If the error term is heteroskedastic and the variance of the error term is correlated with the right-hand side variables, the estimates are likely to be biased. Estimating Equation (23) in log levels will yield consistent estimates under the following conditions
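The displayed conditions are not reproduced in this extraction; assuming a multiplicative error term, the essential requirement can be sketched in the notation of Equation (23) as:

```latex
\mathbb{E}\left[\,\ln u_{ijt} \mid Z_{ijt}, D_{it}, D_{jt}\,\right] = \text{constant},
```

that is, the log of the error must be mean-independent of the regressors; heteroskedasticity whose scale depends on Z_{ijt} violates this condition.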

Furthermore, in the presence of missing trade data, the OLS coefficient estimates will only be consistent when the data are missing completely at random, or when the missing observations are functions of the right-hand side controls but independent of the error terms. There are statistical tests that can be performed to check whether the zero trade flows are economically determined as opposed to missing at random. Perhaps the simplest of these tests is to estimate Equation (22) while including an indicator variable for whether the bilateral pair has positive trade flows in the subsequent period. If the coefficient on this variable is statistically significant, the zero trade flows are likely economically determined.

Alternatively, Helpman, Melitz, and Rubinstein ( 2008 ; hereafter HMR) employ a Heckman-like correction to account for firm heterogeneity and zero trade flows. They develop a model where firm productivity is drawn from a truncated Pareto distribution. They then show how to account for firm heterogeneity empirically and how to employ a two-step Heckman correction for selection into trade. As is typical with Heckman corrections, the researcher needs to find a variable that influences the extensive margin of trade without impacting the intensive margin. HMR used data from the World Bank’s “Doing Business” report for a core set of countries and used religion as the identifying variable for a broader group of countries. However, Santos Silva and Tenreyro ( 2015 ) showed that the HMR specification is only valid under relatively strong distributional assumptions and that these assumptions were rejected by standard statistical tests.

Gamma and Poisson Estimators

While controlling for the multilateral resistance terms helps to account for the correlation between the trade costs and the error term, there are other reasons to suspect the coefficient estimates may be inconsistent. SST showed that the log-linear specification leads to inconsistent estimates if the error term is heteroskedastic and the variance depends on the right-hand side control variables. To see how the heteroskedasticity is likely to depend on the right-hand side variables, the gravity equation is rewritten as

where ν_{ijt} = exp( h(Z_{ij}) · e_{ijt} ) with e_{ijt} ~ N(0, σ²) , so that ν_{ijt} is log-normal with a log-variance σ_ν² = h(Z_{ij})² σ² that is a function of the Z_{ij} ’s. Normalizing so that E( ν_{ijt} | Z_{ij}, D_i, D_j ) = 1 , the expected value E[ ln(ν_{ijt}) | Z_{ij} ] = −(1/2) σ_ν² depends on the Z_{ij} ’s, so that the coefficient estimates of the log-linear model would be given by

When heteroskedasticity is present, and the conditional mean function is exponential, SST showed that the PPML estimator provides consistent estimates. 14 However, there are other Pseudo Maximum Likelihood Estimators that will also lead to consistent estimates of the parameters of interest. The first-order conditions for this class of models include

In the case of the PPML estimator, the variance is proportional to the mean, and so the first-order conditions include
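To make these first-order conditions concrete, a minimal self-contained sketch on synthetic data (all parameters invented for the example; Newton's method on the Poisson pseudo-likelihood is used here rather than a packaged GLM routine):

```python
import numpy as np

def ppml(Z, y, iters=25):
    """Poisson pseudo-maximum likelihood via Newton's method.
    Solves the first-order conditions sum_i (y_i - exp(z_i'b)) z_i = 0."""
    # start from OLS on log(1 + y) so the Newton steps stay stable
    beta = np.linalg.lstsq(Z, np.log1p(y), rcond=None)[0]
    for _ in range(iters):
        mu = np.exp(Z @ beta)
        step = np.linalg.solve(Z.T @ (Z * mu[:, None]), Z.T @ (y - mu))
        beta += step
        if np.max(np.abs(step)) < 1e-12:
            break
    return beta

rng = np.random.default_rng(1)
n = 4000
ldist = rng.uniform(0.0, 3.0, n)                 # log distance
fta = rng.integers(0, 2, n).astype(float)        # trade agreement dummy
Z = np.column_stack([np.ones(n), ldist, fta])
beta_true = np.array([5.0, -1.0, 0.4])
y = rng.poisson(np.exp(Z @ beta_true)).astype(float)  # trade in levels; zeros allowed

beta_hat = ppml(Z, y)
foc = Z.T @ (y - np.exp(Z @ beta_hat))           # numerically zero at the solution
print(np.round(beta_hat, 2))                     # close to beta_true
```

Note that, unlike the log-linear model, the estimator works with trade flows in levels, so zero flows pose no difficulty.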

For the Gamma Pseudo Maximum Likelihood estimator (GPML), where the variance is proportional to the square of the mean, the first-order conditions would include

The term in brackets is the percentage difference between actual trade and predicted trade. As Head and Mayer ( 2014 ) pointed out, this term may be roughly equal to the log difference between actual and predicted trade, in which case the coefficient estimates may be similar to those from OLS.

Nonlinear Least Squares

The final specification discussed in this section is nonlinear least squares (NLS). For NLS, the variance is independent of the conditional mean, so the first-order conditions include

As long as the conditional mean is correctly specified, and the sample size is sufficiently large, the coefficient estimates should be similar across these specifications.

Columns 3 to 5 of Table 1 report the results for PPML, GPML, and NLS. As expected, the GPML estimates are similar to those of the OLS-FE model. The absolute value of the distance elasticity is lower for the Poisson model than for OLS-FE and GPML; this is quite common and was pointed out by SST. For the NLS model, the coefficient estimates on language and colony are notably different from the other specifications.

Model Selection and Heteroskedasticity

In order to assess these models, a number of standard tests for functional form and for the presence of heteroskedasticity can be implemented. To test for the latter, SST used the Ramsey Reset test, but this test may also be thought of as a test of functional form. The idea of the Ramsey Reset test is straightforward: after estimating the model, save the predicted values and re-run the model with the same controls along with the square of the predicted value and other higher-order terms. If these additional regressors are not statistically significant, the functional form is likely correctly specified and heteroskedasticity is not a problem. Another commonly used test is the MaMu (or Park) test for heteroskedasticity. For this test, the fitted values from the original specification are again saved, V̂_{ijt} = ( X_{ijt} − X̂_{ijt} )² is created, and the following model is subsequently estimated

Using the same estimator that generated the predicted values, a statistical test on the value of λ1 can help discriminate among the models. If the coefficient estimate is close to one (two), the PPML (GPML) estimator is more efficient, and if the coefficient is close to zero, then NLS may be appropriate. In many cases, the coefficient estimate is somewhere between one and two. Head and Mayer ( 2014 ) ran a simulation exercise in which the variance structure is proportional to the mean, making PPML the most efficient estimator. They found the coefficient estimate on λ1 to be close to 1.60. When λ̂1 was significantly below two, the MaMu (Park) test was a near-perfect predictor of the model specification. 15
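A minimal simulation of the MaMu (Park) test on synthetic data (all numbers invented for the example): the data are generated with variance proportional to the mean, so the estimated slope should be near one, pointing to PPML.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z = rng.uniform(0.0, 3.0, n)
Z = np.column_stack([np.ones(n), z])
mu = np.exp(5.0 - 1.0 * z)            # conditional mean spans roughly 7 to 150
y = rng.poisson(mu).astype(float)     # variance proportional to the mean

# Step 1: estimate the exponential mean (Newton on the Poisson pseudo-likelihood)
beta = np.linalg.lstsq(Z, np.log1p(y), rcond=None)[0]
for _ in range(25):
    m = np.exp(Z @ beta)
    beta += np.linalg.solve(Z.T @ (Z * m[:, None]), Z.T @ (y - m))

# Step 2: regress log squared residuals on log fitted values (MaMu / Park test)
fitted = np.exp(Z @ beta)
V = (y - fitted) ** 2
keep = V > 0                          # guard against taking the log of an exact zero
W = np.column_stack([np.ones(keep.sum()), np.log(fitted[keep])])
lam = np.linalg.lstsq(W, np.log(V[keep]), rcond=None)[0]
print(round(float(lam[1]), 2))        # a slope near 1 favors PPML; near 2, GPML; near 0, NLS
```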

Given the advances in computing power and improvement in estimating techniques, best practices for reporting empirical results would include estimating the model using OLS, PPML, GPML, and potentially NLS. 16 As Head and Mayer suggest, if all the coefficient estimates are similar, then there is little reason for concern. If the coefficient estimates are economically different, then the Ramsey Reset test and the MaMu (Park) test can provide additional insights into the correct empirical specification.

Endogenous Trade Policy

In many cases, the researcher may be concerned that the right-hand side controls are not exogenous. This is most likely to arise when policy variables are included in the specification. Clearly, tariffs and trade agreements are the results of negotiations between bilateral pairs and are hence unlikely to be randomly distributed across bilateral pairs, even after controlling for other right-hand side variables. By running a series of cross-sectional gravity equations over time, Baier and Bergstrand ( 2007 ) showed that the estimated coefficients on free trade agreements are less stable than those on standard gravity controls. 17 Table 2 presents OLS results for the gravity equation at five-year intervals from 1979 to 2014. In order to account for the endogeneity, one must find instruments that are correlated with tariffs or trade agreements but uncorrelated with the error term in the trade flow equation. An alternative is to assume that there are bilateral-specific effects that evolve slowly enough over time to be treated as constant, in which case the model can be estimated using bilateral fixed effects. Baier and Bergstrand found that when controlling for time-varying multilateral resistance using standard panel fixed effects, the coefficient on trade agreements was positive and significant and was robust to changes in the specification.

The coefficient on trade agreements is negative and significant for several years, after which it becomes positive and significant, ranging from −0.689 to 0.590. If the policy variables are correlated with the error term, consistent estimates can be obtained using standard instrumental variable techniques; Egger et al. ( 2011 ) and Magee ( 2003 ) are two notable examples of IV estimation. Rather than taking the IV approach, Baier and Bergstrand ( 2007 ) assumed that the policy variables are correlated with an unobserved component that is fixed or sufficiently slow-moving over time. If this assumption holds, and all of the other conditions needed for consistent estimation of the log-linear gravity model are met, then consistent estimates can be obtained by fixed effects or by first differencing the data.
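A small simulation illustrates why the fixed-effects strategy works when trade agreements are correlated with an unobserved bilateral effect (all numbers invented for the example): pooled OLS is biased upward, while the within (pair-demeaned) estimator recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(3)
P, T = 500, 6                               # bilateral pairs, time periods
eta = rng.normal(0.0, 1.0, P)               # unobserved pair-specific effect
# FTA adoption is more likely for pairs with a high unobserved propensity to trade
fta = (rng.uniform(0.0, 1.0, (P, T)) < 0.2 + 0.3 * (eta[:, None] > 0)).astype(float)
lnX = eta[:, None] + 0.5 * fta + rng.normal(0.0, 0.2, (P, T))   # true FTA effect: 0.5

def slope(y, x):
    """Simple-regression slope of y on x (with a constant)."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / (x * x).sum())

pooled = slope(lnX.ravel(), fta.ravel())    # ignores eta: biased upward
within = slope((lnX - lnX.mean(axis=1, keepdims=True)).ravel(),
               (fta - fta.mean(axis=1, keepdims=True)).ravel())
print(round(pooled, 2), round(within, 2))   # within estimate is close to 0.5
```

Demeaning by pair is numerically equivalent to including a full set of bilateral fixed effects, which is the strategy described above.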

Table 2. Stability of the Coefficients

Robust standard errors in parentheses. *** p<0.01.

Baier and Bergstrand ( 2007 ) also included lags and leads to capture the dynamic effects of trade agreements. The lagged values of the trade agreement variables detect phase-in effects, while the leads detect feedback effects (i.e., where large bilateral trade flows lead to new trade agreements). Anderson and Yotov ( 2016 ) obtained qualitatively similar findings using a PPML estimator with bilateral fixed effects. Table 3 presents the results using a standard fixed effects estimation and the fixed-effects PPML estimator. In both specifications, there is evidence of economically and statistically significant lagged effects of trade agreements and little evidence of feedback effects. The fixed-effects PPML estimates are also smaller than those of the standard log-linear fixed effects specification.

Table 3. Lagging and Leading Trade Agreements

The Current State and Future of Gravity

Over time, improvements to the data and theoretical innovations have resolved several of the empirical puzzles that trade economists identified when employing the gravity equation. McCallum’s border puzzle is one such issue and was addressed by Anderson and van Wincoop ( 2003 ). Another widely discussed puzzle is the distance puzzle: several studies have shown that the absolute value of the elasticity of trade with respect to distance has increased over time (see, e.g., Disdier & Head, 2008 ). Using data that include gross production and intra-country trade, Yotov ( 2012 ) showed that the effect of distance on trade has declined over time when one measures the impact of distance on international relative to intranational trade. Caron, Fally, and Markusen ( 2014 ) argued that incorporating the gravity framework into a model with multiple sectors and non-homothetic preferences addresses several puzzles in international trade.

More recently, the gravity framework has been used to quantify the general equilibrium impacts of trade policies and to assess their welfare implications. A typical assumption in most empirical specifications is that incomes and prices are exogenous, which may be appropriate when the unit of observation is bilateral trade. Given the theoretical developments of the gravity model, it is relatively easy to embed measured trade costs into general equilibrium models and observe how changes in trade costs impact prices and incomes. An important contribution that set the stage for the use of the gravity equation in evaluating trade policies was the paper by Arkolakis, Costinot, and Rodríguez-Clare ( 2012 ). They showed that for a wide class of models, the welfare implications depend on the share of expenditures on domestically produced goods and the elasticity of trade with respect to (variable) trade costs. Another significant contribution that led to the gravity model’s use in evaluating trade policy was the small-scale model developed by Alvarez and Lucas ( 2007 ), who showed how the Eaton and Kortum model could be calibrated to simulate changes in trade policy. Caliendo and Parro ( 2015 ) quantified the impact of the reduction in tariffs as a result of the North American Free Trade Agreement (NAFTA). Caliendo et al. ( 2017 ) used a quantitative gravity model to evaluate the impact of 20 years of tariff changes through the GATT/WTO and trade agreements. Felbermayr, Gröschl, and Steininger ( 2018 ) used a quantitative trade model to evaluate the impact of Brexit.

Since the advent of new trade theory, there has been an interest in linking trade, firm location, and economic geography. Early theoretical and empirical examples are Fujita et al. ( 1999 ) and Redding and Venables ( 2004 ), which focused on market access and supplier access. As more and better data have become available, these models have used the gravity framework to address how market access and supplier access have impacted different areas (see, e.g., Donaldson & Hornbeck, 2016 ; Donaldson, 2018 ; Allen & Arkolakis, 2014 ; Ahlfeldt, Redding, Sturm, & Wolf, 2015 ).

In the future, several areas need to be addressed. As pointed out by Lai and Trefler ( 2002 ), the gravity equation does an excellent job of explaining cross-sectional variation in trade flows but does not perform as well in explaining the growth of trade. The reason is somewhat obvious: for much of the post–World War II period, trade has increased faster than income, yet in most specifications the bilateral control variables on the right-hand side are constant over time and thus cannot explain the growth in trade. For the gravity model to explain that growth, there must be changes in trade costs. A related area of research would provide a dynamic model of international trade, both at the aggregate level and incorporating firm dynamics. Anderson, Larch, and Yotov ( 2015 ) used an Armington framework with capital accumulation to develop a dynamic general equilibrium gravity model. Sampson ( 2016 ) and Perla, Tonetti, and Waugh ( 2015 ) provided dynamic models of trade and growth with heterogeneous firms. Finally, an area of future research is to develop a better understanding of trade costs derived from first principles. Most theoretical developments have been in terms of firm production and the preferences of individuals; in almost all examples, trade costs are simply assumed to be iceberg trade costs, with a log-linear functional form. Chaney ( 2018 ) provided a model that helps to explain the role of distance in the gravity equation.

Further Reading

  • Allen, T. , & Arkolakis, C. (2015). Elements of advanced international trade . Graduate Trade Notes .
  • Alvarez, F. , & Lucas Robert, E. J. (2007). General equilibrium analysis of the Eaton-Kortum Model of international trade. Journal of Monetary Economics , 54 (6), 1726–1768.
  • Anderson, J. E. , & van Wincoop E. (2003). Gravity with gravitas: A solution to the border puzzle. American Economic Review , 93 (1), 170–192.
  • Baier, S. L. , & Bergstrand, J. H. (2007). Do free trade agreements actually increase members’ international trade? Journal of International Economics , 71 (1), 72–95.
  • Baier, S. L. , & Bergstrand, J. H. (2009). Bonus vetus OLS: A simple method for approximating international trade-cost effects using the gravity equation. Journal of International Economics , 77 (1), 77–85.
  • Baier, S. L. , Kerr, A. , & Yotov, Y. V. (2018). Gravity distance and international trade. In B. Blonigen & W. Wilson (Eds.), Handbook of international trade and transportation (pp. 15–78). Northampton, MA: Edward Elgar.
  • Bergstrand, J. H. , & Egger, P. (2011). Gravity equations and economic frictions in the world economy. In D. Bernhofen , R. Falvey , D. Greenaway , & U. Kreickemeierm (Eds.), Palgrave handbook of international trade . London: Palgrave-Macmillan.
  • Caliendo, L. , Feenstra, R. C. , Romalis, J. , & Taylor, A. M. (2017). Tariff reductions, entry, and welfare: Theory and evidence for the last two decades. National Bureau of Economic Research, Working paper series No. 21768.
  • Eaton, J. , & Kortum, S. (2002). Technology, geography and trade. Econometrica , 70 (5), 1741–1779.
  • Felbermayr, G. , Gröschl, J. , & Steininger, M. (2018). Brexit through the lens of new quantitative trade theory. In Annual Conference on Global Economic Analysis at Purdue University .
  • Hallak, J. C. (2010). A product-quality view of the Linder hypothesis. Review of Economics and Statistics , 92 (3), 453–466.
  • Head, K. , & Mayer, T. (2014). Gravity equations: Workhorse, toolkit, and cookbook. In G. Gopinath , E. Helpman , & K. Rogoff (Eds.), Handbook of International Economics (Vol. 4, pp. 131–195). North Holland: Elsevier.
  • Helpman, E. , Melitz, M. , & Rubinstein, Y. (2008). Trading partners and trading volumes. Quarterly Journal of Economics , 123 (2), 441–487.
  • Krugman, P. R. (1979). Increasing returns, monopolistic competition, and international trade. Journal of International Economics , 9 (4), 469–479.
  • Krugman, P. R. (1980). Scale economies, product differentiation, and the pattern of trade. American Economic Review , 70 (5), 950–959.
  • Melitz, M. J. (2003). The impact of trade on intra-industry reallocations and aggregate industry productivity. Econometrica , 71 (6), 1695–1725.
  • Melitz, M. , & Redding, S. (2014). Heterogeneous firms and trade. In G. Gopinath , E. Helpman , & K. Rogoff (Eds.), Handbook of International Economics (Vol. 4, pp. 131–195). North Holland: Elsevier.
  • Piermartini, R. , & Yotov, Y. (2016). Estimating trade policy effects with structural gravity . School of Economics Working Paper Series. LeBow College of Business, Drexel University.
  • Redding, S. J. (2011). Theories of heterogeneous firms and trade. Annual Review of Economics , 3 (1), 77–105.
  • Redding, S. , & Venables, A. J. (2004). Economic geography and international inequality. Journal of International Economics , 62 (1), 53–82.
  • Redding, S. , & Weinstein, D. (2019). Aggregation and the gravity equation. American Economic Review Papers and Proceedings , 109 , 450–455.
  • Sampson, T. (2016). Dynamic selection: An idea flows theory of entry, trade, and growth. The Quarterly Journal of Economics , 131 (1), 315–380.
  • Santos Silva, J. M. C. , & Tenreyro, S. (2015). Trading partners and trading volumes: Implementing the Helpman-Melitz-Rubinstein Model empirically. Oxford Bulletin of Economics and Statistics , 77 (1), 93–105.
  • Santos Silva, J. M. C. , & Tenreyro, S. (2006). The log of gravity. Review of Economics and Statistics , 88 (4), 641–658.

References

  • Ahlfeldt, G. M. , Redding, S. J. , Sturm, D. M. , & Wolf, N. (2015). The economics of density: Evidence from the Berlin Wall. Econometrica , 83 (6), 2127–2189.
  • Allen, T. , & Arkolakis, C. (2014). Trade and the topography of the spatial economy. Quarterly Journal of Economics , 129 (3), 1085–1140.
  • Alvarez, F. , & Lucas Robert, E. J. (2007). General equilibrium analysis of the Eaton-Kortum model of international trade. Journal of Monetary Economics , 54 (6), 1726–1768.
  • Anderson, J. E. (1979). A theoretical foundation for the gravity equation. The American Economic Review , 69 (1), 106–116.
  • Anderson, J. E. , & van Wincoop, E. (2003). Gravity with gravitas: A solution to the border puzzle. American Economic Review , 93 (1), 170–192.
  • Anderson, J. E. , & Yotov, Y. V. (2016). Terms of trade and global efficiency effects of free trade agreements, 1990–2002. Journal of International Economics , 99 , 279–298.
  • Anderson, J. , Larch, M. , & Yotov, Y. (2015). Growth and trade with frictions: A structural estimation framework . National Bureau of Economic Research, Working paper series No. 21377.
  • Arkolakis, C. , Costinot, A. , Donaldson, D. , & Rodríguez-Clare, A. (2015). The elusive pro-competitive effects of trade. National Bureau of Economic Research, Working paper series No. 21370.
  • Arkolakis, C. , Costinot, A. , & Rodríguez-Clare, A. (2012). New trade models, same old gains? American Economic Review , 102 (1), 94–130.
  • Baier, S. L. , & Bergstrand, J. H. (2001). The growth of world trade: Tariffs, transport costs, and income similarity. Journal of International Economics , 53 (1), 1–27.
  • Baier, S. L. , & Bergstrand, J. H. (2007). Do free trade agreements actually increase members’ international trade? Journal of International Economics , 71 (1), 72–95.
  • Baldwin, R. E. , & Harrigan, J. (2011). Zeros, quality, and space: Trade theory and trade evidence. American Economic Journal: Microeconomics , 3 (2), 60–88.
  • Baldwin, R. , & Taglioni, D. (2006). Gravity for dummies and dummies for gravity equations . National Bureau of Economic Research, Working paper series No. 12516.
  • Behrens, K. , & Murata, Y. (2012). Globalization and individual gains from trade. Journal of Monetary Economics , 59 (8), 703–720.
  • Bergstrand, J. H. (1985). The gravity equation in international trade: Some microeconomic foundations and empirical evidence. The Review of Economics and Statistics , 67 (3), 474–481.
  • Bergstrand, J. H. , & Egger, P. (2011). Gravity equations and economic frictions in the world economy. In D. Bernhofen , R. Falvey , D. Greenaway , & U. Kreickemeierm (Eds.), Palgrave handbook of international trade . London: Palgrave-Macmillan.
  • Bertoletti, P. , Etro, F. , & Simonovska, I. (2018). International trade with indirect additivity. American Economic Journal: Microeconomics , 10 (2), 1–57.
  • Caliendo, L. , & Parro, F. (2015). Estimates of the trade and welfare effects of NAFTA. The Review of Economic Studies , 82 (1), 1–44.
  • Caron, J. , Fally, T. , & Markusen, J. R. (2014, May). International trade puzzles: A solution linking production and preferences. Quarterly Journal of Economics , 129 (3), 1501–1552.
  • Chaney, T. (2008). Distorted gravity: The intensive and extensive margins of international trade. American Economic Review , 98 (4), 1707–1721.
  • Chaney, T. (2018). The gravity equation in international trade: An explanation. Journal of Political Economy , 126 (1), 150–177.
  • Cheng, I.-H. , & Wall, H. J. (2005). Controlling for heterogeneity in gravity models of trade and integration. Federal Reserve Bank of St. Louis Review , 87 (1), 49–63.
  • Deardorff, A. V. (1998). Determinants of bilateral trade: Does gravity work in a neoclassical world? In J. Frankel (Ed.), The regionalization of the world economy (pp. 7–32). Chicago: University of Chicago Press.
  • De Benedictis, L. , & Taglioni, D. (2010). The gravity model in international trade. In L. De Benedictis & L. Salvatici (Eds.), The trade impact of European Union preferential policies . Berlin: Springer-Verlag.
  • Disdier, A.-C. , & Head, K. (2008). The puzzling persistence of the distance effect on bilateral trade. Review of Economics and Statistics , 90 (1), 37–48.
  • Donaldson, D. (2018). Railroads of the Raj: Estimating the impact of transportation infrastructure. American Economic Review , 108 (4–5), 899–934.
  • Donaldson, D. , & Hornbeck, R. (2016). Railroads and American economic growth: A ‘market access’ approach. The Quarterly Journal of Economics , 131 (2), 799–858.
  • Dornbusch, R. , Fischer, S. , & Samuelson, P. (1977). Comparative advantage, trade, and payments in a Ricardian model with a continuum of goods. American Economic Review , 67 (5), 823–839.
  • Eaton, J. , Kortum, S. S. , & Sotelo, S. (2012). International trade: Linking micro and macro. National Bureau of Economic Research, Working paper series No. 17864.
  • Eaton, J. , & Tamura, A. (1995). Bilateralism and regionalism in Japanese and U.S. trade and direct foreign investment patterns. National Bureau of Economic Research, Working paper series No. 4758.
  • Egger, P. , Larch, M. , Staub, K. E. , & Winkelmann, R. (2011). The trade effects of endogenous preferential trade agreements. American Economic Journal: Economic Policy , 3 (3), 113–143.
  • Fally, T. (2015). Structural gravity and fixed effects. Journal of International Economics , 97 (1), 76–85.
  • Feenstra, R. C. (2004). Advanced international trade: Theory and evidence by Robert C. Feenstra, 2004 . Princeton, NJ: Princeton University Press.
  • Frankel, J. A. (1997). Regional trading blocs . Washington, DC: Institute for International Economics.
  • Fujita, M. , Krugman, P. R. , & Venables, A. J. (1999). The spatial economy—cities, regions, and international trade . Cambridge, MA: MIT Press.
  • Hallak, J. C. (2006). Product quality and the direction of trade. Journal of International Economics , 68 (1), 238–265.
  • Hallak, J. C. (2010). A product-quality view of the Linder Hypothesis. Review of Economics and Statistics , 92 (3), 453–466.
  • Harrigan, J. (2003). Specialization and the volume of trade: Do the data obey the laws? In K. E. Choi & J. Harrigan (Eds.), Handbook of international trade (1st ed., pp. 85–118). Oxford, U.K.: Blackwell.
  • Head, K. , & Mayer, T. (2014). Gravity equations: Workhorse, toolkit, and cookbook. In G. Gopinath , E. Helpman , & K. Rogoff (Eds.) Handbook of International Economics (Vol. 4, pp. 131–195). North Holland: Elsevier.
  • Head, K. , Mayer, T. , & Reis, J. (2010). The erosion of colonial trade linkages after independence. Journal of International Economics , 81 , 1–14.
  • Helliwell, J. F. (1997). National borders, trade and migration. Pacific Economic Review , 2 (3), 165–185.
  • Helpman, E. (1987). Imperfect competition and international trade: Evidence from fourteen industrial countries. Journal of the Japanese and International Economies , 1 (1), 62–81.
  • Helpman, E. , & Krugman, P. R. (1989). Trade policy and market structure . Cambridge, MA: MIT Press.
  • Hummels, D. (2007). Transportation costs and international trade in the second era of globalization. The Journal of Economic Perspectives , 21 (3), 131–154.
  • Lai, H. , & Trefler, D. (2002). The gains from trade with monopolistic competition: Specification, estimation and mis-specification. National Bureau of Economic Research, Working paper series No. 9169.
  • Leamer, E. E. , & Stern, R. M. (1970). Quantitative international economics (1st ed.). Boston: Allyn and Bacon.
  • Limao, N. , & Venables, A. J. (2001, September). Infrastructure, geographical disadvantage, transport costs, and trade. The World Bank Economic Review , 15 (3), 451–479.
  • Linder, S. B. (1961). An essay on trade and transformation . Stockholm, Sweden: Almqvist & Wicksells.
  • Linnemann, H. (1966). An econometric study of international trade flows . Amsterdam, The Netherlands: North-Holland.
  • Magee, C. S. (2003). Endogenous preferential trade agreements: An empirical analysis. The B.E. Journal of Economic Analysis & Policy , 2 (1), 1–19.
  • McCallum, J. (1995). National borders matter. American Economic Review , 85 (3), 615–623.
  • Novy, D. (2013). International trade without CES: Estimating translog gravity. Journal of International Economics , 89 (2), 271–282.
  • Perla, J. , Tonetti, C. , & Waugh, M. (2015). Equilibrium technology diffusion, trade, and growth. National Bureau of Economic Research, Working paper series No. 20881.
  • Piermartini, R. , & Yotov, Y. (2016). Estimating trade policy effects with structural gravity . School of Economics Working Paper Series. Philadelphia, PA: LeBow College of Business, Drexel University.
  • Wei, S.-J. (1996). Intra-national versus international trade: How stubborn are nations in global integration? National Bureau of Economic Research, Working paper series No. 5531.
  • Yotov, Y. V. (2012). A simple solution to the distance puzzle in international trade. Economics Letters , 117 (3), 794–798.

1. This review covers much of the evolution of the gravity equation. Like any review, the choice of topics is selective. The interested reader may also want to consult other surveys, in particular Head and Mayer (2014), Allen and Arkolakis (2015), Bergstrand and Egger (2011), De Benedictis and Taglioni (2010), Baier, Kerr, and Yotov (2018), and Piermartini and Yotov (2016).

2. The frictionless gravity model is derived in the next section. Leamer and Stern (1970), Anderson (1979), and Deardorff (1998) are among the earliest theoretical contributions deriving a frictionless gravity model.

3. The correlation between the natural logarithm of expenditure shares and production shares is 0.67.

4. Conditioning on the pair being contiguous, the elasticity of trade costs with respect to distance falls to 1.12.

5. Head, Mayer, and Reis ( 2010 ) highlighted this relationship.

6. Baier et al. ( 2018 ) highlight this more rigorously and develop a model that highlights these interactions between trade agreements and other trade costs.

7. This section ignores the specific taste parameter for a country’s good ( β ), allowing one to relate factor prices to a country’s production technology.

8. It is assumed that there are no internal trade costs so that the landed price in country i is equal to the factory gate price.

9. More recently, Bertoletti, Etro, and Simonovska ( 2018 ) derive a gravity-like equation when agents have indirectly additive preferences.

10. Notable exceptions were Eaton and Tamura ( 1995 ) and Frankel ( 1997 ).

11. PPML, GPML, and NLS are in the same class of estimators commonly referred to as the generalized linear models (GLM).
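To make the GLM connection concrete, here is a minimal self-contained sketch (not from the article; the synthetic data and variable names are illustrative assumptions) of PPML fitted by iteratively reweighted least squares, the standard GLM algorithm. GPML and NLS differ only in the IRLS weights, reflecting their different variance assumptions:

```python
import numpy as np

def ppml(X, y, tol=1e-10, max_iter=50):
    """Poisson pseudo-maximum likelihood via iteratively reweighted least
    squares (IRLS). Consistent for E[y|X] = exp(X @ beta) even when y is
    not truly Poisson. GPML and NLS use the same algorithm with IRLS
    weights mu**0 and mu**2, respectively, instead of mu."""
    mu = y + 0.5                        # safe starting values
    eta = np.log(mu)
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        z = eta + (y - mu) / mu         # working response under the log link
        XtW = X.T * mu                  # Poisson GLM weights: W = mu
        new = np.linalg.solve(XtW @ X, XtW @ z)   # weighted least squares
        done = np.max(np.abs(new - beta)) < tol
        beta = new
        eta = X @ beta
        mu = np.exp(eta)
        if done:
            break
    return beta

# Synthetic gravity-style data (hypothetical, for illustration only)
rng = np.random.default_rng(42)
n = 2000
lndist = rng.normal(8.0, 1.0, n)                  # log bilateral distance
contig = (rng.random(n) < 0.2).astype(float)      # shared-border dummy
X = np.column_stack([np.ones(n), lndist, contig])
trade = rng.poisson(np.exp(X @ np.array([10.0, -1.0, 0.5])))

print(ppml(X, trade))   # roughly [10.0, -1.0, 0.5]
```

Because the estimator works with trade in levels rather than logs, zero trade flows pose no difficulty, which is one reason PPML has become the workhorse estimator for structural gravity.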

12. For the reasons outlined in Cheng and Wall ( 2005 ) and Baier and Bergstrand ( 2007 ), five-year intervals are used.

13. As discussed earlier, these terms were typically approximated using “remoteness” variables that may be poor proxies for the multilateral resistance.

14. Another appealing feature of the PPML model is that the coefficient estimates on the exporter-year and importer-year fixed effects are the theoretically consistent multilateral resistance terms derived in the previous section (see Fally, 2015 ).

15. Egger, Larch, Staub, and Winkelmann ( 2011 ) examined the small sample properties of the different GLM estimators.

16. Eaton, Kortum, and Sotelo ( 2012 ) use a Multinomial Pseudo Maximum Likelihood estimator to estimate a gravity model using export shares. Head and Mayer ( 2014 ) find that this model performs well in the presence of zeros and when the variance of the error term is proportionate to the mean. However, it performs less well when the variance is proportional to the square of the mean.

17. Baier and Bergstrand ( 2007 ) attributed this instability to the endogeneity of the trade agreements.

Related Articles

  • The Economics of Innovation, Knowledge Diffusion, and Globalization
  • Globalization, Trade, and Health Economics
  • The Law and Political Economy of International Trade Agreements

Printed from Oxford Research Encyclopedias, Economics and Finance. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 25 April 2024

  • Cookie Policy
  • Privacy Policy
  • Legal Notice
  • Accessibility
  • [66.249.64.20|81.177.182.174]
  • 81.177.182.174

Character limit 500 /500

IMAGES

  1. La investigación empírica: Definición, métodos, tipos y ejemplos

    empirical research in economics

  2. Empirical Research: Definition, Methods, Types and Examples

    empirical research in economics

  3. What Is Empirical Research? Definition, Types & Samples

    empirical research in economics

  4. Empirical Research: Definition and Examples

    empirical research in economics

  5. (PDF) Methods Used in Economic Research: An Empirical Study of Trends

    empirical research in economics

  6. What is empirical research

    empirical research in economics

VIDEO

  1. What is Empirical Research Methodology ? || What is Empiricism in Methodology

  2. 2022 Methods Lecture, Christopher Walters, "Empirical Bayes Applications"

  3. How to do empirical research in Economics: 3

  4. Epistemology

  5. Uniswap

  6. DDE

COMMENTS

  1. Methods Used in Economic Research: An Empirical Study of Trends and Levels

    The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are classified into three main groups by method: theory, experiments, and empirics. The theory and empirics groups are almost equally large. Most empiric papers use the classical method, which derives an ...

  2. Home

    Empirical Economics publishes high-quality papers that apply advanced econometric or statistical methods to confront economic theory with observed data.. Exemplary topics are treatment effect estimation, policy evaluation, forecasting, and econometric methods. Contributions may focus on the estimation of established relationships between economic variables or on the testing of hypotheses.

  3. PDF Method and Applications Experimental Economics

    Over the past two decades, experimental economics has moved from a fringe activity to become a standard tool for empirical research. With experimental economics now regarded as part of the basic tool-kit for applied economics, this book demonstrates how controlled experiments can be useful in providing evidence relevant to economic research.

  4. The case for economics

    A multidecade study shows economics increasingly overlaps with other disciplines, and has become more empirical in nature. A new study examines 140,000 economics papers published from 1970 to 2015, tallying the "extramural" citations that economics papers received in 16 other academic fields, including sociology, medicine, and public health.

  5. Empirical Research in Economics : Growing Up with R

    Empirical Research in Economics: Growing Up with R. Empirical Research in Economics. : Changyou Sun. Pine Square LLC, Jul 28, 2015 - Business & Economics - 580 pages. The major motivation behind the book is that there has been a lack of systematic approach in teaching students how to conduct empirical studies.

  6. PDF The Credibility Revolution in Empirical Economics: How Better Research

    The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics Joshua Angrist and Jörn-Steffen Pischke NBER Working Paper No. 15794 March 2010 JEL No. C20,C30,C50,C51,D00 ABSTRACT This essay reviews progress in empirical economics since Leamer's (1983) critique.

  7. Empirical Strategies in Economics: Illuminating the Path From Cause to

    The view that empirical strategies in economics should be transparent and credible now goes almost without saying. By revealing for whom particular instrumental variables (IV) estimates are valid, the local average treatment effects (LATE) framework helped make this so.

  8. The Credibility Revolution in Empirical Economics: How Better Research

    eempirical work in economics. He urged empirical researchers to "take the con mpirical work in economics. He urged empirical researchers to "take the con oout of econometrics" and memorably observed (p. 37): "Hardly anyone takes ut of econometrics" and memorably observed (p. 37): "Hardly anyone takes ddata analysis seriously.

  9. PDF Redalyc.How to do empirical economics

    empirical research in economics. The participants discuss their reasons for starting research projects, data base construction, the methods they use, the role of theory, and their views on the main alternative empirical approaches. The article ends with a discussion of a set of articles which exemplify best practice in empirical work.

  10. Empirical Strategies in Economics: Illuminating the Path from Cause to

    Joshua D. Angrist, 2022. "Empirical Strategies in Economics: Illuminating the Path From Cause to Effect," Econometrica, Econometric Society, vol. 90 (6), pages 2509-2539, November. citation courtesy of. Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating ...

  11. An empirical turn in economics research

    An empirical turn in economics research. A table of results in an issue of the American Economic Review. Over the past few decades, economists have increasingly been cited in the press and sought by Congress to give testimony on the issues of the day. This could be due in part to the increasingly empirical nature of economics research.

  12. Research in Economics

    About the journal. Established in 1947, Research in Economics is one of the oldest general-interest economics journals in the world and the main one among those based in Italy. The purpose of the journal is to select original theoretical and empirical articles that will have high impact on the debate in the social …. View full aims & scope.

  13. Empirical Literature on Economic Growth, 1991-2020: Uncovering Extant

    The factors required to achieve sustainable economic growth in a country are debated for decades, and empirical research in this regard continues to grow. Given the relevance of the topic and the absence of a comprehensive, systematic literature review, we used bibliometric techniques to examine and document several aspects in the empirical ...

  14. The Credibility Revolution in Empirical Economics: How Better Research

    The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke. Published in volume 24, issue 2, pages 3-30 of Journal of Economic Perspectives, Spring 2010, Abstract: Since Edward Leamer's memorable...

  15. How to do empirical economics

    Downloadable! This article presents a discussion among leading economists on how to do empirical research in economics. The participants discuss their reasons for starting research projects, data base construction, the methods they use, the role of theory, and their views on the main alternative empirical approaches. The article ends with a discussion of a set of articles which exemplify best ...

  16. PDF Economics 191 Topics in Economic Research

    Research papers • Length: approximately 25 double-spaced pages. -More on this in a moment… • Content: new theoretical or empirical research. -Not simply a survey of existing research -More than speculation • How to find a topic: start with the topics covered in this course. -You need a well-defined question or hypothesis

  17. PDF How to get started on research in economics

    Tradeoff: the more novel it is what you are doing, the lower the standards for execution you will get away with. Three broad categories of research in economics: • real theory: contribute a mechanism for others. • applied theory: illuminate the economics of a particular issues. • empirical work: test a model or estimate a parameter.

  18. PDF Writing Tips For Economics Research Papers

    rooted in economic theories or current economic affairs, (2) an insightful assessment of how the current study adds value to the existing body of research on the same topic, and (3 a keen understanding of the empirical challenges surrounding the cause-and-effect issues related to a chosen

  19. Empirical Research in Transaction Cost Economics:

    AA Review and Assessment. Howard A. Shelanski. Kellogg, Huber, Hansen, Todd, and Evans, Washington, D.C. Peter G. Klein. University of Georgia. This article summarizes and assesses the growing body of empirical research. in transaction cost economics (TCE). Originally an explanation for the scale and.

  20. Design-Based Research in Empirical Microeconomics

    J53 Labor-Management Relations; Industrial Jurisprudence. Design-Based Research in Empirical Microeconomics by David Card. Published in volume 112, issue 6, pages 1773-81 of American Economic Review, June 2022, Abstract: I briefly review the emergence of "design-based" research methods in labor economics in the 1980s and early 1990s.

  21. Empirical Research in Economics

    Home Science Vol. 218, No. 4577 Empirical Research in Economics. Back To Vol. 218, No. 4577. Full access. Letter. Share on. Empirical Research in Economics. Jacob Cohen Authors Info & Affiliations. ... research, and educational use. Purchase this issue in print. Buy a single issue of Science for just $15 USD. Media Figures Multimedia. Tables ...

  22. Empirical methods in the economics of education

    Empirical research in the economics of education often addresses causal questions. Does an educational policy or practice cause students' test scores to improve? Does more schooling lead to higher earnings? This article surveys the methods that economists have increasingly used over the past two decades to distinguish accidental association ...

  23. Gravity Models and Empirical Trade

    Summary. The gravity model of international trade states that the volume of trade between two countries is proportional to their economic mass and a measure of their relative trade frictions. Perhaps because of its intuitive appeal, the gravity model has been the workhorse model of international trade for more than 50 years.