Tuesday, May 29, 2007

Deming on Regional Development

Applying statistical thinking to economic development: today I offer a simple list that takes a page from Deming’s approach to business. This list follows on from the previous post on regional metrics. Based on a few emails I received, I realize I should have been clearer: something needs to be measured, but my main point is that it’s worth thinking carefully about how statistics can lie to us if we don’t first recognize the truths behind the numbers. So, in today’s blog, I offer three examples of metrics pitfalls.

1. Absolute success does not necessarily translate to competitive advantage.
Regional metrics commonly examine growth trends in a variety of industries and technology areas, then focus investment recommendations on the largest (by absolute numbers) or fastest growing economies. Even if a region or company in a region measures well on an activity, industry, or business model, that does not make it the platform for growth. Here’s a straw man: we have all seen consulting reports noting (paraphrased) “Region X has a rapid growth in green chemistry patents and employees, among the top 10% in MSA’s. Thus, green chemistry is a central investment opportunity.”

Another straw-man example we’re all familiar with: regions chasing biotechnology in all its forms. In conversations with policy makers, elected officials, and foundations in cities across the U.S., I consistently hear something along the lines of “We’re creating a biotech corridor built on our (large medical school/local drug companies/excellent reputation for health care/etc.).” A major medical school with a specialty area is insufficient if (a) other regions have similar levels of expertise (and are thus competitors for your resources) and (b) the particular technology area does not create agglomeration economies.

These observations tell us very little. Rather, to be a competitive advantage and a platform for growth, a region must exceed other organizations in other regions, AND the regional activity or asset cannot be competed away in the medium term (3-10 years). Metrics on which industries are driving a region today must distinguish the industries and opportunities that are easily competed away, and should qualitatively capture the factors that make that likely: easily outsourced jobs, a codified (rather than tacit) knowledge base, and a lack of specialized complementary assets.

2. A misunderstanding of “the direction of causality”.
We use this term frequently in economics to describe research that concludes an outcome is caused by a specific activity when, in fact, the activity is caused by the outcome. Richard Florida’s “Creative Class” argument has been criticized on these grounds: do creative innovators move to regions and thereby cause economic prosperity, or do creative types move to regions that are already prosperous, because prosperity means more resources for artistic, scientific, and other creative endeavors? Correlation alone does not prove a good econ dev strategy. Good metrics have to establish that one can plausibly assume away, or control for, “reverse causality”.
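The point can be made with a toy simulation (all numbers invented): even when the data are generated so that prosperity attracts creative workers, a naive regression of prosperity on creative share looks exactly like evidence that creatives cause prosperity.

```python
# Hypothetical sketch: correlation cannot identify the direction of causality.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 500

# True data-generating process: prosperity is exogenous...
prosperity = rng.normal(loc=100, scale=15, size=n_regions)
# ...and creative workers move TO prosperous regions (the reverse of the claim).
creative_share = 0.4 * prosperity + rng.normal(scale=10, size=n_regions)

# Naive cross-sectional analysis: regress prosperity on creative share.
slope, intercept = np.polyfit(creative_share, prosperity, deg=1)
corr = np.corrcoef(creative_share, prosperity)[0, 1]

print(f"slope = {slope:.2f}, correlation = {corr:.2f}")
# Both are strongly positive -- indistinguishable from a world in which
# creatives caused prosperity. The cross-section alone cannot separate
# the two stories.
```

The cure is research design (instruments, natural experiments, panel data with timing), not more of the same cross-sectional correlations.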

3. You can be fooled by the success of a start-up
A successful start-up or two does not guarantee regional growth. Regional growth requires that new firms generate positive externalities (wealth and jobs created that are not internalized by the company) for the region. Has Ann Arbor, Michigan (headquarters of Domino’s Pizza) turned into the economic center of pizza making? Why not? Because while efficient delivery was a valuable business innovation, pizza franchising does not generate the externalities necessary for agglomeration. In fact, there has been recent academic debate over whether such common drivers of externalities (knowledge spillovers) truly exist, and what determines them.


Pizza is a trite example to prove a point, but the same type of thinking holds true for semiconductors, agile robotics, biotechnology, tissue engineering, and so on. Simply establishing metrics based on counts, trends, and the like -- the approach I most commonly see regions taking these days, without implicating anyone in particular -- fails to recognize the competitive characteristics needed. We would rather have a moderately growing industry with the promise of agglomeration than a rapidly growing but unprotectable (easily moved) one.

With these ideas laid out, future blogs will address the apparent elements of agglomeration economies (if such a thing truly exists).


Monday, May 28, 2007

OK, I skipped a week. But, I had an excuse...

A number of folks emailed me last week. Some to prod along another blog entry; some to tell me they told me so. When I restarted an effort for a weekly blog on economic development in April, there were naysayers: "You won't keep up with regular blog entries".

Then, my lack of attention to my recently refounded blogging effort became public:
http://www.post-gazette.com/pg/07146/789186-96.stm

So, now fueled by prodding emails and Ms. Shropshire's article, I hereby pledge to pick up where I left off-- with Tuesday blog entries on entrepreneurship and tech-based economic development. Keep your eyes peeled tomorrow!

Tuesday, May 15, 2007

The Unintended Consequences of Metrics

Among economic development circles, we have heard the phrase “unintended consequences” a lot over the past few years. My favorite recent topic subject to unintended consequences is metrics. Matt Hamilton, Anne Swift, and I, all of Carnegie Mellon, are researching the impact of technology-based economic development programs on the regional economy. So far, this project has revealed to us the seemingly endless number of cities and regions asking the inevitable question: “what should we track to know if we’re on track?”

The answer is complicated beyond imagination because for benchmarking, trend analysis, and the like, no two regions face the same supply, demand, and political economy conditions. Even with a sophisticated multivariate regression, we currently lack the economic understanding (theory and evidence) to make meaningful comparisons.

But the grander issue, the “big picture” if you will, is: what happens when we start tracking data… and publishing it? On a regular basis? That’s where unintended consequences come in.

We all recall the famed studies of Frederick Taylor, the grandfather of efficiency and operations management. Taylor, it is recounted, would time workers with a stopwatch as they performed various tasks, only to realize later that workers changed their practices when they knew they were being timed.

This happens every day in economic development. Take everyone’s favorite yardstick, patent counts: an oft-used metric for regional innovation output… or capacity, depending on the consulting report. I’ve even seen the same consulting company use it both ways across reports -- at least pick one for consistency; it can’t be both at once. In any case, once you start measuring patents, a region’s investment strategy quickly shifts to favor patent-intensive industries such as pharma, biotech, and chemicals. Twenty years of economic research by Wes Cohen, Dick Nelson, John Walsh, Sid Winter, and others has shown that firms in these industries rely on patents far more than those in semiconductors, software, and other areas. As a result, measuring patents largely measures the mix of biotech and chemical industries in a region and rarely represents actual performance, thereby veiling deeper economic trends.
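A back-of-the-envelope sketch (all numbers invented) shows how two regions with identical innovation rates can post very different patent counts purely because of industry mix:

```python
# Toy example: patent counts reflect industry mix, not underlying innovation.
# Patent propensity -- patents filed per innovation -- varies sharply by
# industry (high in pharma/biotech/chem, low in software). Numbers are
# illustrative assumptions, not estimates.
innovations_per_firm = 10  # both regions innovate at the same rate

patent_propensity = {"biotech": 0.9, "software": 0.2}  # patents per innovation

# Region A is biotech-heavy; Region B is software-heavy. Same firm count.
region_a = {"biotech": 80, "software": 20}
region_b = {"biotech": 20, "software": 80}

def patent_count(region):
    """Total patents = firms x innovations per firm x industry propensity."""
    return sum(firms * innovations_per_firm * patent_propensity[industry]
               for industry, firms in region.items())

print(patent_count(region_a))  # 80*10*0.9 + 20*10*0.2 = 760.0
print(patent_count(region_b))  # 20*10*0.9 + 80*10*0.2 = 340.0
# Identical innovation rates, very different patent counts: the metric is
# measuring industry mix, not performance.
```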

Patent counts are perhaps the most obvious example, but they are just one of many metrics that fall into this trap. Recognizing this problem, econ dev gurus propose a “multi-tiered” or “multi-layered” measurement approach drawing from a basket of measures. But now we have taken one bad measure and multiplied it by ten, only compounding our “measurement error” in statistical terms.
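One way to see why the basket does not rescue us (a simulated sketch with invented numbers): if the ten indicators all share the same systematic distortion -- say, they are all driven by industry mix -- then averaging them narrows the random noise but leaves the bias untouched, so the basket converges confidently on the wrong answer.

```python
# Illustrative simulation: correlated errors do not average away.
import numpy as np

rng = np.random.default_rng(1)
true_value = 0.0
shared_bias = 5.0    # systematic error common to every indicator (assumed)
noise_sd = 2.0       # idiosyncratic noise per indicator (assumed)
n_indicators = 10
n_trials = 10_000

# Each trial draws one reading per indicator; all readings share the bias.
readings = true_value + shared_bias + rng.normal(
    scale=noise_sd, size=(n_trials, n_indicators))
basket = readings.mean(axis=1)  # the multi-tiered "basket" average

single_rmse = np.sqrt(np.mean((readings[:, 0] - true_value) ** 2))
basket_rmse = np.sqrt(np.mean((basket - true_value) ** 2))
print(f"single-indicator RMSE: {single_rmse:.2f}")
print(f"basket-of-10 RMSE:     {basket_rmse:.2f}")
# The basket's error never falls below the shared bias (5.0): averaging
# correlated measures reduces variance, not systematic error.
```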

The bottom line: focusing on econ dev metrics without a nuanced understanding of what popular measures really capture dangerously misguides policy and business leaders. “Dangerous” is a heavy-handed word, but I mean it. And the concern is compounded when one considers the temptation to report metrics on a short-term (annual or quarterly) basis, when we all know economic development is a long-term investment.

Tuesday, May 8, 2007

The Ivory Tower’s Children

Industrial cities are increasingly turning to universities as the source of economic expansion. Chicago, Cleveland, Columbus, Pittsburgh, and others have established formalized programs to support university-based start-ups. Often these programs are paid for by a combination of public and philanthropic (quasi-public) dollars. Thus, the policy question: are these good investments for the taxpayer?

The answer to this simple question involves complexities beyond any one or two studies, but some data are worth pointing out.

Our first pass shows that university start-ups have a higher rate of success than we all thought. In a recent study (Management Science, 2006, v. 52: 173-186), Arvids Ziedonis of the University of Michigan and I examined a novel data set covering almost two decades of licensing activity in the University of California system. We compared the relative performance of start-ups and established firms in commercializing inventions discovered in the same university departments. We found little difference between start-ups and established firms in the time it takes to develop and bring to market a product based on a licensed invention. We also found that start-ups generate greater licensing revenues for similar technologies than established firms do.

Moreover, more than 80% of the start-ups founded between 1986 and 1995 (all pre-dot-com, “real companies”) were still operating in 2004.

The good news is that parallel examinations of Carnegie Mellon and Georgia Tech data yielded similar results. In personal discussions a few years ago, Ed Roberts at MIT told me he found the same for his university. Thus, among the few universities for which we have data, we observe a remarkably good success rate, certainly better than we all might have expected.

This observation doesn’t definitively prove that university start-ups are the best use of public funds, but the data to date certainly support the view that they’re good investments. The larger policy questions are what form support programs should take and how they should work, and whether university-based firms deliver a real pay-off for the regional economy.