Stephen M. Maurer is a Full Adjunct Professor of Public Policy and Director of the Goldman School Project on Information Technology and Homeland Security (“ITHS”). ITHS serves as a focal point for the School's science, innovation, and technology initiatives. Maurer teaches and writes in the fields of homeland security, innovation policy, and the new economy.
From 1982 to 1996, Maurer practiced high technology and intellectual property litigation at leading law firms in Arizona and California. During that time he represented such diverse clients as IBM, Apple, Aerojet General Corporation, and the Navajo Nation.
Maurer has been associated with the Goldman School since 1999. During that time he has written extensively on a variety of topics including database policy, IP theory, antitrust, neglected disease policy, and commercial open source. His research has appeared in numerous journals including Nature, Science, Bulletin of the Atomic Scientists, and Economica. Maurer teaches courses on the New Economy (“Cyberlife”), Science Policy, and Information Technology.
Maurer's current research interests include self-governance in scientific communities and the impact of copyright law on culture. He currently teaches courses on (a) innovation and (b) homeland security policy.
Maurer holds a B.A. from Yale University and a J.D. from Harvard University.
- Information Technology and Homeland Security Project
Download a PDF (150KB, updated 04-09-2018)
Areas of Expertise
- Homeland Security
- Intellectual Property, Open Source, and Innovation
- WMD Terrorism
- Pharmaceutical Innovation
- Database policy
GSPP Working Paper (April 2018)
Scientists have long recognized two distinct forms of human thought. “Type 1” reasoning is unconscious, intuitive, and specializes in finding complex patterns. It is typically associated with the aesthetic emotion that John Keats called “beauty.” “Type 2” reasoning is conscious, articulable, and deductive. Scholars usually assume that legal reasoning is entirely Type 2. However, critics from Holmes to Posner have protested that unconscious and intuitive judgments are at least comparably important. This article takes the conjecture seriously by asking what science can add to our understanding of how lawyers and judges interpret legal texts.
This is a good time to take stock. Recent advances in cognitive psychology, brain imaging, and neural network theory have already pushed many humanities scholars to rethink postmodern interpretations that privilege politics and culture over texts. This article argues that a parallel shift is overdue in law and that Type 1 reasoning, which specializes in pattern recognition, provides a natural explanation for how judges choose among competing legal theories. Finally, and most surprisingly, the article documents evidence from cognitive psychology showing that Type 1 judgments exhibit significant universality, i.e., that humans who study subjects for long periods often make similar choices regardless of the societies they were born into. This resolves a long-standing difficulty in jurisprudence, which often struggles to explain why one legal interpretation should be more convincing than another.
The rest of the article analyzes how Type 1 thinking enters into legal reasoning and outcomes. It begins by reviewing 19th Century theories that claimed a leading role for intuitive reasoning in public policy. It then updates these theories to accommodate the relatively weak statistical correlations that psychologists have documented, arguing that modern court systems amplify these signals in approximately determinate ways. It also explains why advocates should rationally prefer formalist judges to pragmatic ones. Crucially, the existence of universality implies a measure of agreement across all lawyers regardless of personal bias or politics. This common ground gives judges a reliably neutral basis for deciding cases.
Full paper here. (417KB)
GSPP Working Paper: GSPP15-002 (May 2015)
Industry has organized increasingly effective self-governance initiatives since the 1980s. Almost all of these are based on large retailers’ economic leverage over global supply chains. This article documents commonalities in six of the best-studied examples – coffee, dolphin-safe tuna, fisheries, lumber, food processing, and artificial DNA – and offers straightforward economic and political theories to explain them. The theories teach that oligopoly competition can strongly constrain private power so that firms are answerable to a shadow electorate of consumers. Furthermore, rational retailers will cede significant power to suppliers and NGOs. The arguments generalize traditional claims that free markets constrain private power and suggest an explicit framework for deciding when private politics are legitimate.
Available at SSRN (4KB)
GSPP Working Paper: GSPP15-001 (April 2015)
Legal scholars usually analyze copyright as an incentive and sometimes an obstacle to creation. This encourages us to see publishers as middlemen who siphon off rents that would be better spent on authors. By comparison, recent social science research emphasizes that word-of-mouth markets are highly imperfect. This means that many deserving titles will never find readers unless some publisher takes the trouble to market them. But this second view is deeply subversive. After all, the need for publishers – and reward – does not end when a book is published. At least in principle, copyright should last forever.
The trouble with this argument is that it assumes what ought to be proven. How much effort do publishers really invest in finding forgotten titles? And does vigorous marketing attract more readers than high copyright prices deter? This article looks for answers in the history of 20th Century print publishers and today’s Print-on-Demand and eBook markets. We argue that, far from promoting dissemination, copyright frequently operates to suppress works that would otherwise erode the price of new titles. This pathology has gotten dramatically worse in the Age of eBooks. Meanwhile, public domain publishers are facing their own crisis. Mid-20th Century books had large up-front costs. This deterred copyists. By comparison, digital technologies make it easy for copyists to enter the market. This has suppressed profits to the point where many public domain publishers spend little or nothing on forgotten titles.
The article concludes by reviewing possible reforms. Partial solutions include clarifying antitrust law so that firms have more freedom to implement price discrimination; modifying copyright so that consumers can re-sell used eBooks; letting on-line markets limit the number of publishers allowed to post redundant public domain titles on their sites; and strengthening non-commercial institutions for finding, curating, and delivering quality titles to readers.
Available at SSRN (4KB)
GSPP Working Paper: GSPP14-002 (June 2014)
Copyright theorists often ask how incentives can be designed to create better books, movies, and art. But this is not the whole story. As the Roman satirist Martial pointed out two thousand years ago, markets routinely ignore good and even excellent works. The insight reminds us that incentives to find content are just as necessary as incentives to make it. Recent social science research explains why markets fail and how timely interventions can save deserving titles from oblivion. This article reviews society’s long struggle to fix the vagaries of search since the invention of literature. We build on this history to suggest policies for the emerging world of online media.
Homeric literature was produced and disseminated through direct interactions between audiences and authors. Though appealing in many ways, the process was agonizingly slow. By the 1st Century AD commercial publishers had moved to the modern model of charging readers above-cost prices to pay for search and marketing. Crucially, the new model was only sustainable so long as firms could suppress copying. We argue that Roman and early modern publishers developed remarkably successful self-help strategies to do this. However, their methods did little to suppress copying after the first edition. This seemingly modest defect made publishers profoundly risk averse. Ancient best-seller lists were invariably dominated by authors who had been dead for centuries.
Publishers’ self-help systems collapsed under a wave of piracy in the mid-17th Century. This led to the first modern copyright statutes. Crucially, the new laws extended protection beyond the first edition. This encouraged modern business models in which publishers gamble on a dozen titles for each that succeeds. The ensuing proliferation of titles helped fuel the Enlightenment. It also promoted a rich new ecosystem of search institutions including libraries, newspaper critics, and editors.
The Digital Age has changed everything. As copyright fades, the old institutions for finding titles are drying up. We explore several possible responses. First, society can shore up current publishing models by expanding copyright and technical protections. We argue that these methods cannot save book search but might be adequate for music and movies. Second, search engines could pay for editors. We argue that an on-line Digital Bookstore can suppress copyists long enough to fund reasonable search efforts. Finally, society can return to the Homeric pattern of harvesting advice directly from audiences. We explore various commercial and open source institutions for organizing the work.
GSPP Working Paper: GSPP14-001 (February 2014)
For the past twenty years, large corporations have routinely developed and enforced industry-wide standards to address problems that are only distantly related to earning a profit. This includes writing detailed private regulations for environmental protection, national security, working conditions, and other topics formerly reserved to governments. At the same time, the US Supreme Court has said that the Sherman Act forbids any “extra-governmental agency” that “provides extra-judicial tribunals for the determination and punishment of violations.” This seems to ban enforceable rules. Despite this, many US policymakers continue to argue that private standards are efficient and desirable. Many corporations are sympathetic but fear legal liability and are reluctant to participate unless and until the law is clarified.
This article asks how existing law can be reformed to arrive at principled rules for deciding when private standards violate the Sherman Act. We begin with an historical account of recent private initiatives to regulate food processing, fisheries, forestry, and coffee production. We argue that these private rules are often just as effective – and burdensome – as government regulation. We then generalize from this evidence to explain when and how large corporations are able to impose their preferences through industry-wide standards. We also describe the politics that determines how large corporations use their power. We argue that the need to earn positive profit and defend market share frequently encourages – and sometimes forces – large companies to choose standards that please consumers. In these cases, consumers act as a shadow electorate that constrains private power in much the same way that real voters constrain elected officials. Finally, our examples show that big corporations often decide to share power with smaller rivals, suppliers, NGOs, and other stakeholders. We argue that these delegations are genuine and make private standards more accountable.
The article concludes by asking how current law can be reformed. We argue that the Sherman Act serves two goals. The first is economic efficiency. We argue that private standards advance this goal by addressing problems (“externalities”) that lack well-defined market prices. We argue that private bodies should be allowed to address such problems in the first instance knowing that government may later step in to change or supplement policy. The second goal is to protect democracy from private power. We argue that this danger is minimal so long as (a) market structure encourages corporations to make choices that please consumers and other shadow electorates, (b) the standard setting body represents a wide range of affected stakeholders, or (c) industry selects the prevailing standard from multiple competing proposals. Significantly, all of these tests can be determined from objective evidence without obscure metaphysical inquiries into when private power becomes “illegitimate” or “poses a threat” to democratic politics.
GSPP Working Paper: GSPP12-005 (December 2012)
Download a PDF (576KB)
GSPP Working Paper: GSPP12-003 (November 2012)
The US Supreme Court’s decision in Graham v. John Deere (1966) placed neoclassical economic insights at the heart of modern patent law. But economic theory has moved on. Since the 1990s, legal scholars have repeatedly mined the discipline to propose ad hoc rules for individual industries like biotech and software. So far, however, they have almost always ignored the literature’s broader lessons for doctrine. This article asks how well today’s patent doctrine follows and occasionally departs from modern economic principles.
The article starts by reviewing what innovation economists have learned since the 1970s. While it is conventional for legal scholars to divide the neoclassical literature into multiple competing “theories,” shared mathematical methods and modeling assumptions (e.g. profit-maximization) guarantee that the neoclassical literature’s various strands cannot disagree in any fundamental way. For this reason, whatever differences exist in the neoclassical literature are more accurately seen as special cases of a single underlying theory. We argue that this underlying theory must include at least three principles. The first limits reward to non-obvious inventions and explicitly entered the law through Graham’s PHOSITA standard. The second principle holds that patent breadth should be chosen to balance the benefits of innovation against the costs of monopoly. Though widely recognized by judges and scholars, the principle’s influence on doctrine remains remarkably incoherent. The final principle prescribes rules for allocating patent rewards where multiple inventors contribute to a shared technology. Unlike the first two principles, this insight was unknown in the 1960s and has yet to enter the law.
Remarkably, patent doctrine uses a single concept – Graham’s “Person Having Ordinary Skill in the Art” or “PHOSITA” – to address all three principles. This means that doctrinal solutions for one principle can have unintended impacts on the others. In some cases, this link is optional. For example, the article shows how the PHOSITA concept could be generalized to provide separate and distinct tests for, say, non-obviousness and patent breadth. However, other links are mandatory. In particular, the article shows that any doctrinal architecture built on Graham’s PHOSITA test automatically allocates reward among successive inventors. Though reasonable, these default outcomes fall short of the economic ideal. The article analyzes how changes in the Utility, Blocking Patents, Reverse Doctrine of Equivalents, and the Written Description doctrines can mitigate this problem. However, other gaps are inherent and cannot be eliminated without abandoning Graham itself. This radically revised architecture would probably cause more problems than it solves.
Download a PDF (634KB)
GSPP Working Paper: GSPP12-003 (November 2012)
Synthetic biologists have vigorously debated the need for community-wide biosecurity standards for the past decade. Despite this, the US government’s official response has been limited to weak and entirely voluntary Guidelines. This article describes attempts by journal editors, academic scientists, and commercial firms to organize private alternatives at the grassroots level. Private commercial standards, in particular, are significantly stronger than federal Guidelines and currently operate across more than eighty percent of the synthetic DNA industry. The paper generalizes from these examples by asking when strong private standards are both feasible and likely to produce outcomes that are comparably democratic to conventional agency regulation. It closes by describing interventions that government can use to promote and manage grassroots standards initiatives.
Download a PDF (96KB)
GSPP Working Paper: GSPP10-011 (November 2010)
Many observers are skeptical of claims that private entrepreneurs can perform traditional governmental functions like supporting basic research, keeping WMD away from terrorists, or protecting public health. This article presents five recent counterexamples. These include initiatives designed to establish new health and safety standards in nanotechnology; build a central repository for worldwide mutations data; use on-line volunteers to find cures for tuberculosis; and require biotech companies to screen customer orders for products that can be used to make weapons. In principle, many more initiatives are both possible and desirable. Historically, however, government has done little to promote private initiatives and has sometimes destabilized them. The article suggests strategies for overcoming this problem.
Download a PDF (146KB)
GSPP Working Paper: GSPP10-010 (November 2010)
WMD technologies are increasingly available from commercial firms located all over the world. Scholars point out that traditional political initiatives based on regulation and treaty will have difficulty controlling this complex environment. By comparison, market forces routinely impose uniform, worldwide standards (e.g. Windows software, Blu-Ray video players) in many high tech industries. Recently, the companies in one such industry (artificial DNA) used these same economic forces to develop and implement a biosecurity standard. Surprisingly, the resulting standard is more stringent – and at least arguably more enforceable – than the US government’s own official guidelines. This article begins by presenting a short history of how private and public standards evolved in the artificial DNA industry. It then goes beyond this motivating example to ask whether we can expect private non-proliferation standards to be similarly effective in other industries. Next, it reviews what modern theories have to say about standard-setting in both government and the private sector. This analysis suggests that private standards should be reasonably feasible, stringent, and enforceable for many dual use industries. Furthermore, theory suggests that private standards will often reflect society’s risk preferences at least as well as public regulation. The article concludes by suggesting specific reforms for improving private and public standards-setting still further.
Download a PDF (134KB)
GSPP Working Paper: GSPP10-006 (August 2010)
We discuss welfare and various policy interventions for mixed ICT markets where firms use either 'open source' (OS) or 'closed source' (CS) business models. We find that the existence of OS business models improves social welfare compared to all-CS industries by letting firms share costs and avoid duplication. However, code sharing also establishes a de facto quality-cartel that suppresses OS firms' incentives to invest. Competition from CS firms weakens this cartel and improves welfare. That said, market forces alone provide too little CS competition. We find no support for various government interventions based on tax breaks for OS-based firms and pro-OS procurement preferences by government. However, policies that directly target the supply of OS code have a positive impact.
Download a PDF (490KB)
GSPP Working Paper: GSPP10-001 (January 2010)
The number of open source (“OS”) software projects has grown exponentially for at least a decade. Unlike early open source projects, much of this growth has been funded by commercial firms that expect to earn a profit on their investment. Typically, firms do this by selling bundles that contain both OS software and proprietary goods (e.g. cell phones, applications programs) and services (custom software). We present a general two-stage Cournot model in which arbitrary numbers of competing OS and closed source (“CS”) firms decide how much software to create in Stage 1 and how many bundles to supply in Stage 2. We find that the amount of OS software delivered depends on (a) the degree of substitutability between proprietary products, (b) the number of OS and CS firms competing in the market, and (c) the savings available to OS firms from cost-sharing. However, code-sharing also guarantees that no OS firm can offer better software than any other OS firm. This suppresses quality competition between OS firms and restricts their output much as an agreement to suppress competition on quality would.
Competition from CS firms weakens this quality-cartel effect, so mixed industries often offer higher welfare. We find that Pure-OS (Pure-CS) markets are sometimes stable against CS (OS) entry, so that the required OS/CS mix never occurs. Even where mixed OS/CS industries do exist, moreover, the proportion of OS firms needed to stabilize the market against entry is almost always much larger than the ratio required to optimize welfare. We examine various policy options for addressing this imbalance with tax policy, funding of OS development, and procurement preferences. We find that the first-best solution in our model is to tax OS firms and grant tax breaks to CS firms. Conversely, government policies that fund OS development or establish procurement preferences for OS software actually widen the gap between desired and actual OS/CS ratios. Despite this, funding OS development can still improve welfare by boosting total (private plus government) OS investment above the levels that a private cartel would deliver.
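The cost-sharing intuition behind the model can be illustrated with a deliberately simplified numeric sketch. This is not the paper's actual two-stage model (which adds endogenous software investment and quality); it is a standard symmetric one-shot Cournot market with linear demand, and all parameter values are hypothetical:

```python
# Illustrative sketch only: symmetric Cournot competition with linear
# inverse demand P = a - b*Q. The paper's model is richer (two stages,
# endogenous code quality, mixed OS/CS industries); the numbers here
# are hypothetical and chosen only to show the cost-sharing effect.

def cournot_equilibrium(n, a=100.0, b=1.0, c=10.0):
    """Per-firm quantity, market price, and per-firm gross profit for
    n identical Cournot competitors with constant marginal cost c."""
    q = (a - c) / (b * (n + 1))   # standard symmetric Cournot quantity
    price = a - b * n * q
    profit = (price - c) * q
    return q, price, profit

# Cost-sharing intuition: n OS firms split one fixed development cost F
# by pooling code, while each CS firm bears F alone.
F = 500.0   # hypothetical fixed cost of developing the software
n = 4
q, p, gross = cournot_equilibrium(n)
os_net = gross - F / n   # an OS firm shares the common code base
cs_net = gross - F       # a CS firm develops independently
print(f"price={p:.1f}, gross profit per firm={gross:.1f}")
print(f"net profit: OS firm={os_net:.1f}, CS firm={cs_net:.1f}")
```

With these numbers the shared fixed cost leaves each OS firm profitable while a stand-alone CS developer is not, which is the duplication-avoidance channel the abstract describes; the offsetting quality-cartel effect requires the full two-stage model and is not captured here.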
Download a PDF (658KB)
GSPP Working Paper (April 2006)
Open source methods for creating software rely on developers who voluntarily reveal code in the expectation that other developers will reciprocate. Open source incentives are distinct from earlier uses of intellectual property, leading to different types of inefficiencies and different biases in R&D investment. The open source style of software development remedies a defect of intellectual property protection, namely, that it does not generally require or encourage disclosure of source code. We review a considerable body of survey evidence and theory that seeks to explain why developers participate in open source collaborations instead of keeping their code proprietary, and evaluate the extent to which open source may improve welfare compared to proprietary development.
Download a PDF (336KB)
GSPP Working Paper (May 2004)
There is growing public interest in alternatives to intellectual property including, but not limited to, prizes and government grants. We collect various historical and contemporary examples of alternative incentives, and show when they are superior to intellectual property. We also give an explanation for why federally funded R&D has moved from an intramural activity to largely a grant process. Finally, we observe that much research is supported by a hybrid system of public and private sponsorship, and explain why this makes sense in some circumstances.
Download a PDF (216KB)
GSPP Working Paper (July 2002)
Patents differ from other forms of intellectual property in that independent invention is not a defense to infringement. We argue that the patent rule is inferior. First, the threat of entry by independent invention would induce patentholders to license the technology, lowering the market price. Provided independent invention is as costly as the original cost of R&D, the market price will still be high enough to cover the patentholder's costs. Second, a defense of independent invention would reduce the wasteful duplication of R&D effort that occurs in patent races. In either case, the threat of independent invention creates a mechanism that limits patentholders' profits to levels commensurate with their costs of R&D.
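The limit-pricing logic of the first argument can be shown with a stylized arithmetic sketch (hypothetical numbers, not drawn from the article): if a rival can lawfully re-invent the technology for cost R, the patentholder cannot profitably charge total license fees above R, so its reward is capped near the cost of invention.

```python
# Stylized sketch of the limit-pricing argument with hypothetical numbers.
# If independent invention were a defense to infringement, a rival facing
# re-invention cost r_entrant would never pay total license fees above
# r_entrant; it would simply re-invent instead. (Timing, risk, and demand
# effects are ignored for clarity.)

def max_license_fee(r_entrant):
    """Highest total fee the patentholder can extract before the rival
    prefers independent invention over licensing."""
    return r_entrant

R_original = 100.0   # hypothetical: patentholder's own R&D cost
R_entrant = 100.0    # hypothetical: rival's cost of independent invention

fee_cap = max_license_fee(R_entrant)
# When re-invention costs at least as much as the original R&D,
# the capped fee still lets the patentholder recover its costs.
covers_cost = fee_cap >= R_original
print(f"fee cap = {fee_cap:.0f}; covers original R&D: {covers_cost}")
```

The sketch makes the abstract's proviso concrete: the mechanism limits the patentholder's profit to roughly its R&D cost only because re-invention is assumed to be at least as expensive as the original invention.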
Download a PDF (476KB)