"Political Economy", "Development", "Area Studies"--these are all inherently interdisciplinary subjects. They all occupy that bloody crossroads where economics, politics, history, and culture meet. Economics deals with the production and distribution of wealth. Politics deals with how we decide what our collective purposes are and implement them, or how we oppress each other. History is the record of what people have done and suffered, and tells us what sets of politico-economic arrangements have succeeded and failed in the past. And culture determines both what kinds of things people think are worth doing--what they value--and also what will be the modalities of human interaction on top of which politics and economics are built.
How are we to think about issues of "political economy" in the context of modern industrial societies--or, rather, in the context of all modern societies, developing or developed, for almost every society today is industrial or post-industrial? This course takes a look at how research in international and area studies is carried out.
Thus the core spine of an IAS 102 course consists of four different sections: (a) a look at hypothesis-testing methodologies, and how they work, (b) a look at textual-analysis or "understanding" based approaches that attempt through close reading and thick description to get inside the skulls of the people being studied, (c) a look at the limits of the hypothesis-testing methodology, and (d) a look at the interplay between statistical or comparative-control studies on the one hand and understanding-based, case, or interpretive studies on the other--without the first you can't know whether your case studies are representative; without the second you can't know whether your statistical counts or comparative conclusions mean anything real.
Let me expand on these four sections a little:
Hypothesis-testing methodologies are, no one doubts, a key piece of intellectual progress in the social sciences and in historical studies. To conduct a research project in the hypothesis-testing line of work is to make a guess about what the important causes and effects are, and then to figure out what the world would look like if you were wrong--what pieces of evidence or patterns of social interaction would convince you that your guess was wrong. The key to the hypothesis-testing methodology is that it does not stack the deck. You say beforehand what you think is the case, what the world would look like if you were right, and what the world would look like if you were wrong. And then you go and find out. The key is collecting data of some sort, whether observations for statistical studies or cases for comparative analysis, and then letting that data tell you whether your initial guesses were right or wrong (or perhaps which of your tentative initial guesses were right, and which were wrong).
But hypothesis-testing methodologies cannot be all of social science and historical study. They presume that you understand the situation well enough to make an initial guess. There is an alternative mode of analysis based not on looking at a situation from the outside but on looking at a situation from the inside. Trying to get inside the skulls of human beings--either through close analysis of the texts they have written, to uncover the perhaps not fully conscious, and surely not fully reflected-upon, presuppositions of their thought, or through "thick descriptions" of places and episodes--produces a different but equally valid kind of knowledge. And understanding-based methodologies can at times reach places that hypothesis-testing cannot go. If you want to understand an individual historical episode as an end in itself, how can you gather statistics or make comparisons? Without an initial degree of subjective understanding of human perceptions and motivations, how can you formulate hypotheses to test?
Moreover, hypothesis-testing methodologies have stringent limits. The first is that you need a way to determine whether the statistical results or the comparative conclusions you have reached mean what you think they mean. A certain degree of doubling-back to understand how the situations you study looked to the human actors on the ground at the time is essential to guard hypothesis-testing studies from soaring into outer space. The second is that so much in the conclusion of a hypothesis-testing study depends on where the burden of proof lies. When the evidence gets messy--and the evidence always gets very messy--a hypothesis-testing study will decide against whichever perspective has been assigned the greater part of the burden of proof. Stacking the deck--deliberately or accidentally, consciously or unconsciously--is a terrible vice of those who carry out hypothesis-testing studies. And readers and reviewers of such work need to be very aware of how to tell when the deck has been stacked.
On the other hand, understanding-based methodologies have equally stringent but different limits. The finest and deepest of textual analyses is useless if nobody (or only a trivial few) ever read the book in that particular way (Leo Strauss's analysis of Machiavelli comes to mind). The thickest of thick descriptions paints over rather than reveals knowledge if it is a thick description of an atypical case (how many eighteenth-century Frenchmen told the story of the great cat massacre, anyway?). Without statistical, comparative, or other hypothesis-testing evidence you will inevitably have no clue whether your case study is representative or not--and in a world as big as this one, you can find five examples of anything if you look hard enough.
Individual teachers of IAS 102 are, of course, free to focus on whichever of these sections they like. But all four sections teach important lessons about the methodology of the social sciences. And teaching each edition of IAS 102 so that all four sets of lessons are learned is important if we are to serve our undergraduates well as they proceed through the program.