Working Papers
Multi-Sector Model of Tradable Emission Permits
Working Paper: GSPP10-005 (May 2010)
This paper presents a multi-sector model of tradable emission permits, which includes oligopolistic and perfectly competitive industries. The firms in oligopolistic industries are assumed to exercise market power in the tradable permit market as well as in the product market. Specifically, we examine the effects of the initial permit allocation on the equilibrium outcomes, focusing on the interaction among these product and permit markets. It is shown that raising the number of initial permits allocated to one firm in an oligopolistic industry increases the output produced by that firm. Under certain conditions, raising a “clean” (less-polluting) firm’s share of the initial permits can lead to reductions in both the product and permit prices. We discuss criteria for the socially optimal allocation of initial permits, considering the trade-off between production inefficiency and consumer benefit.
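The allocation effect at the heart of the abstract can be illustrated with a much simpler, hypothetical setup in the spirit of the classic dominant-firm permit market (Hahn 1984): one firm with market power trades permits against a competitive fringe with linear inverse permit demand. All functional forms and parameter values below are illustrative assumptions, not taken from the paper’s multi-sector model.

```python
# Hypothetical dominant-firm permit market (Hahn-1984 style), NOT the
# paper's multi-sector model. A single firm with market power sells
# (or buys) permits against a competitive fringe whose inverse permit
# demand is sigma = ALPHA - BETA * (fringe permit holdings).

ALPHA, BETA = 100.0, 1.0   # fringe inverse permit demand (assumed)
GAMMA = 2.0                # slope of dominant firm's marginal abatement cost
E_TOTAL = 60.0             # total permit cap (assumed)
E_BAU = 50.0               # dominant firm's business-as-usual emissions

def equilibrium(a1):
    """Permit-market equilibrium given the dominant firm's initial allocation a1."""
    a_fringe = E_TOTAL - a1                     # fringe's initial allocation
    # The firm chooses net permit sales s to maximize
    #   sigma(s)*s - (GAMMA/2)*(E_BAU - a1 + s)**2,
    # where sigma(s) = ALPHA - BETA*(a_fringe + s). First-order condition:
    s = (ALPHA - BETA * a_fringe - GAMMA * (E_BAU - a1)) / (2 * BETA + GAMMA)
    sigma = ALPHA - BETA * (a_fringe + s)       # equilibrium permit price
    mac = GAMMA * (E_BAU - a1 + s)              # firm's marginal abatement cost
    return s, sigma, mac

for a1 in (10.0, 30.0):
    s, sigma, mac = equilibrium(a1)
    print(f"allocation={a1:5.1f}  net sales={s:+5.1f}  "
          f"price={sigma:5.1f}  marginal abatement cost={mac:5.1f}")
```

With these made-up numbers, raising the dominant firm’s allocation from 10 to 30 turns it from a permit buyer (price 57.5, held below its marginal abatement cost of 65.0) into a seller (price 62.5, above marginal cost 55.0). Unlike the competitive benchmark, the initial allocation moves the permit price and the firm’s output-relevant emissions, which is the kind of non-neutrality the paper generalizes to multiple interacting product and permit markets.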
Invisible Students Bridging the Widest Achievement Gap
Working Paper: GSPP10-003 (April 2010)
African-American boys have long fared worse in school than their peers. This paper documents this achievement gap, then assesses a number of evidence-based strategies that hold promise for bridging it. Those strategies range from high-quality early education and skill-building reading programs to mentoring initiatives and interventions that address stereotype vulnerability. Much of the existing research has not isolated the effects on black males, and the paper offers new data that demonstrate those impacts. A sequence of interventions, beginning before kindergarten and continuing through college, is recommended.
The New (Commercial) Open Source: Does it Really Improve Social Welfare?
Working Paper: GSPP10-001 (January 2010)
The number of open source (“OS”) software projects has grown exponentially for at least a decade. Unlike early open source projects, many of these newer projects are funded by commercial firms that expect to earn a profit on their investment. Typically, firms do this by selling bundles that contain both OS software and proprietary goods (e.g., cell phones, application programs) and services (custom software). We present a general two-stage Cournot model in which arbitrary numbers of competing OS and closed source (“CS”) firms decide how much software to create in Stage 1 and how many bundles to supply in Stage 2. We find that the amount of OS software delivered depends on (a) the degree of substitutability between proprietary products, (b) the number of OS and CS firms competing in the market, and (c) the savings available to OS firms from cost-sharing. However, code-sharing also guarantees that no OS firm can offer better software than any other OS firm. This suppresses quality competition between OS firms and restricts their output much as an agreement to suppress competition on quality would.
Competition from CS firms weakens this quality-cartel effect, so mixed industries often offer higher welfare. We find that pure-OS (pure-CS) markets are sometimes stable against CS (OS) entry, so that the required mixed OS/CS state never occurs. Even where mixed OS/CS industries do exist, moreover, the proportion of OS firms needed to stabilize the market against entry is almost always much larger than the target ratio required to optimize welfare. We examine various policy options for addressing this imbalance with tax policy, funding of OS development, and procurement preferences. We find that the first-best solution in our model is to tax OS firms and grant tax breaks to CS firms. Conversely, government policies that fund OS development or establish procurement preferences for OS software actually increase the gap between desired and actual OS/CS ratios still further. Despite this, funding OS development can still improve welfare by boosting total (private plus government) OS investment above the levels that a private cartel would deliver.
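The cost-sharing channel, item (c) in the abstract, can be sketched with a stripped-down, one-shot Cournot market; everything below (linear demand, symmetric costs, a single shared development cost) is an illustrative assumption, not the paper’s two-stage model with endogenous software quality.

```python
# Illustrative one-shot Cournot market with a shared fixed development
# cost, NOT the paper's two-stage model. n symmetric firms face inverse
# demand P = A - Q and constant marginal cost C; OS firms split one fixed
# development cost F among themselves, while each CS firm pays F alone.

A, C, F = 12.0, 2.0, 6.0   # demand intercept, marginal cost, dev cost (assumed)

def cournot_profits(n_os, n_cs):
    """Per-firm net profit (os_profit, cs_profit) in a symmetric Cournot market."""
    n = n_os + n_cs
    q = (A - C) / (n + 1)           # symmetric Cournot quantity per firm
    p = A - n * q                   # market-clearing price
    gross = (p - C) * q             # variable profit per firm
    os_profit = gross - (F / n_os if n_os else 0.0)  # dev cost shared
    cs_profit = gross - F                            # dev cost borne alone
    return os_profit, cs_profit

os_pi, cs_pi = cournot_profits(n_os=2, n_cs=1)
print(f"OS firm net profit: {os_pi:.2f}, CS firm net profit: {cs_pi:.2f}")
```

Under these assumptions an OS firm out-earns an otherwise identical CS rival purely through cost-sharing (3.25 vs. 0.25), even though all firms sell at the same price and quantity. The quality-cartel effect and the entry-stability results in the abstract require the full two-stage model and are not captured by this sketch.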
Measuring Emissions Against an Alternative Future: Fundamental Flaws in the Structure of the Kyoto Protocol’s Clean Development Mechanism
Working Paper: ERG09-01 (December 2009)
Default Risk Evaluation in the Single-Family Mortgage Market
Working Paper: GSPP09-102 (October 2009)
This Federal Housing Finance Agency (FHFA) report fulfills the requirement of Section 1602 of the Housing and Economic Recovery Act of 2008 that FHFA conduct a study of ways to improve the overall default risk evaluation used with respect to residential mortgage loans and report to Congress on the results of that study. To aid in the preparation of the report, FHFA and the Federal Deposit Insurance Corporation’s (FDIC’s) Center for Financial Research jointly selected seven papers for a public symposium held on September 16, 2009. This report summarizes and evaluates those papers in the context of previous research and the comments provided by discussants at that symposium. The appendices to the report provide the papers themselves.
FHFA is grateful to the FDIC’s Center for Financial Research and its director, Paul Kupiec, for co-sponsoring the September 2009 symposium. FHFA also gratefully acknowledges the contribution of Professor John Quigley of the University of California, Berkeley to the preparation of the report.
Openness, Open Source, and the Veil of Ignorance
Working Paper (September 2009)
Open source collaborations increasingly take place among commercial firms whose interest is profit. Why would profit-motivated firms voluntarily share code? One reason is that cost reductions can outweigh increases in rivalry, an argument that is especially persuasive when the contributors make complementary products. However, cost reductions do not explain why open source is a more profitable way of sharing than other forms of licensing. Why would firms use an inflexible contract like the GPL? I present a model showing that open source licensing can lead to higher industry-wide profit than would result if each first innovator chose the most profitable license after finding itself in that position. From behind a veil of ignorance, that is, not knowing which firm will be first, open source licensing creates higher expected profit for the industry as a whole, and thus for each firm, than if first innovators were allowed to choose.
Political and Public Acceptability of Congestion Pricing: Ideology and Self Interest
Working Paper: GSPP09-010 (September 2009)
Studies of the “stated preferences” of households generally report public and political opposition by urban commuters to congestion pricing. It is thought that this opposition inhibits or precludes tolls and pricing systems that would enhance efficiency in the use of scarce roadways. This paper analyzes the only case in which road pricing was decided by a citizen referendum on the basis of experience with a specific pricing system. The city of Stockholm introduced a toll system for seven months in 2006, after which citizens voted on its permanent adoption. We match precinct voting records to citizen commute times and costs by traffic zone, and we analyze patterns of voting in response to economic and political incentives. We document political and ideological incentives for citizen choice, but we also find that the pattern of time savings and incremental costs exerts a powerful influence on voting behavior. In this instance, at least, citizen voters behave as if they value commute time highly. When they have experienced first-hand the out-of-pocket costs and time savings of a specific pricing scheme, they are prepared to freely adopt policies that reduce congestion on urban motorways.
How Housing Busts End: Home Prices, User Cost, and Rigidities During Down Cycles
Working Paper: GSPP08-101 (September 2009)
Property markets have always been cyclical, and many economists have explored the causes and consequences of cyclicality in housing and commercial real estate. Indeed, for more than a half century after the Great Depression, the National Bureau of Economic Research (NBER) regularly explored linkages among real estate investment, mortgage credit, and aggregate business cycles (see, e.g., NBER volumes by Wickens and Foster 1937; Blank 1954; Abramovitz 1964; Zarnowitz 1992). The regular boom and bust cycles in real property were important in their own right, but also as key components of the aggregate business cycle.
In previous work we have analyzed the way housing booms at the top of the business cycle tend to unwind, relying upon the experience of the USA over the past 35 years (Case and Quigley 2008). In that analysis we sought to emphasize the unique aspects of housing markets that contributed to the end of the boom in the US economy in the twenty-first century.
By 2008, however, the decline in the US housing and mortgage markets had moved far beyond the unwinding of a traditional and well-understood housing boom. We are in the midst of an unprecedented decline. Housing starts and existing sales are at record low levels, and the huge US mortgage market has collapsed in a sea of defaults and foreclosures, sending shock waves through the world financial system. Trillions of dollars in what were thought to be “safe” fixed-income investments have been wiped out in a short period of time.
Now the questions are: When and how will the current severe decline be arrested? When will the market return to some sense of normalcy? How far will prices decline? How large will financial losses be? Who will ultimately bear those losses? What can we do to avoid a disaster like this in the future? This paper does not pretend to answer all of those questions; instead, it provides a framework emphasizing the economic factors that will ultimately determine those answers.
Our focus will be on the housing market and home prices. The second section presents a quantitative history of the movement of home prices in the USA between 1975 and 2008, including the impact of the boom-and-bust cycle of 2000–2008 on the national balance sheet, as well as the historical relationship between home prices and household income over the cycle.
We then go on to describe the traditional process of disequilibrium adjustment which is unique to the housing market and which has played itself out during every previous recovery period. The housing bust of 2005–2008, however, differs in a variety of ways that make the task of predicting the timing and the character of the ultimate bottom more difficult.
The following section presents a perspective that helps integrate the effects of the important, but seemingly disparate, aspects of the current housing crisis in the USA. These aspects include home price changes, expectations about price changes, the demand for housing, and the diffusion of relaxed mortgage underwriting standards in the USA during the period leading up to the crash in the housing market. This perspective is the annual user cost of housing capital, which drives the demand for housing and homeownership, the demand for housing finance, and the demand for liquidity in the housing market. This perspective also reconciles the demand for relaxed standards of mortgage finance and the profitability of those alternative mortgages to financial institutions. The final section is a brief conclusion.
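The user-cost perspective can be made concrete with the standard textbook formula: the annual cost of owning is the house price times the sum of the interest, property-tax, and maintenance/depreciation rates, minus the expected rate of price appreciation. The rates and price below are invented for illustration and are not figures from the paper.

```python
# Stylized annual user cost of owner-occupied housing (standard textbook
# formula; all parameter values are invented for illustration).

def user_cost(price, interest=0.06, prop_tax=0.012,
              maint_depr=0.025, expected_apprec=0.0):
    """Annual user cost = price * (i + tau + m - g)."""
    rate = interest + prop_tax + maint_depr - expected_apprec
    return price * rate

PRICE = 300_000.0
normal = user_cost(PRICE, expected_apprec=0.02)  # modest expected appreciation
boom = user_cost(PRICE, expected_apprec=0.08)    # boom-era expectations
print(f"normal expectations: ${normal:,.0f}/yr; boom expectations: ${boom:,.0f}/yr")
```

Holding everything else fixed, raising expected appreciation from 2% to 8% cuts the annual cost of owning this hypothetical house from $23,100 to $5,100. That is the mechanism the user-cost perspective highlights: expected price growth lowers the perceived cost of owning, feeding demand for housing, homeownership, and mortgage finance during the boom.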