hackr.de

die romantische komödie


Coursera: Model Thinking

model thinking
16 June 2012

Model Thinking
Instructor: Scott E. Page, University of Michigan
Period: 2012
Status: completed, incl. exams and certificate

Video

Note: Model Thinking was my first MOOC and I was lucky: it has remained one of the best to this day and thus shaped my disposition towards MOOCs very positively.


Course Syllabus

Section 1: Introduction: Why Model?

In these lectures, I describe some of the reasons why a person would want to take a modeling course. These reasons fall into four broad categories:

To be an intelligent citizen of the world
To be a clearer thinker
To understand and use data
To better decide, strategize, and design

There are two readings for this section. These should be read either after the first video or at the completion of all of the videos.

The Model Thinker: Prologue, Introduction and Chapter 1 (pdf)
Why Model? (pdf) by Joshua Epstein

Section 2: Sorting and Peer Effects

We now jump directly into some models. We contrast two types of models that explain a single phenomenon, namely that people tend to live and interact with people who look, think, and act like themselves. After an introductory lecture, we cover famous models by Schelling and Granovetter that cover these phenomena. We follow those with a fun model about standing ovations that I wrote with my friend John Miller.
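
As a quick illustration outside the course materials, here is a minimal Schelling-style sketch in Python: a toy one-dimensional version, not the NetLogo model used in the lectures, with world size, neighbourhood radius, tolerance threshold, and step count chosen arbitrarily.

import random

# Minimal Schelling-style sketch (illustrative only, not the course's NetLogo model).
# Two agent types ('A', 'B') plus empty cells ('.') on a ring; an agent is unhappy if
# fewer than THRESHOLD of its occupied neighbours share its type, and then moves to a
# random empty cell.  Size, radius, threshold, and step count are arbitrary choices.
SIZE, RADIUS, THRESHOLD, STEPS = 100, 2, 0.4, 2000
random.seed(1)
world = [random.choice("AAB.") for _ in range(SIZE)]

def same_type_share(i):
    neigh = [world[(i + d) % SIZE] for d in range(-RADIUS, RADIUS + 1) if d != 0]
    occupied = [n for n in neigh if n != "."]
    return sum(n == world[i] for n in occupied) / len(occupied) if occupied else 1.0

def avg_same_type_share():
    agents = [i for i, c in enumerate(world) if c != "."]
    return sum(same_type_share(i) for i in agents) / len(agents)

print("average same-type share before:", round(avg_same_type_share(), 2))
for _ in range(STEPS):
    i = random.randrange(SIZE)
    if world[i] != "." and same_type_share(i) < THRESHOLD:
        j = random.choice([k for k, c in enumerate(world) if c == "."])
        world[i], world[j] = ".", world[i]
print("average same-type share after: ", round(avg_same_type_share(), 2))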

In this second section, I show a computational version of Schelling’s Segregation Model using NetLogo. NetLogo is free software authored by Uri Wilensky of Northwestern University. I will be using NetLogo several times during the course. It can be downloaded here:

NetLogo

The Schelling Model that I use can be found by clicking on the “File” tab, then going to “Models Library”. In the Models Library directory, click on the arrow next to the Social Science folder and then scroll down and click on the model called Segregation.

The readings for this section include some brief notes on Schelling’s model and then the academic papers of Granovetter and Miller and Page. I’m not expecting you to read those papers from start to end, but I strongly encourage you to peruse them so that you can see how social scientists frame and interpret models.

Notes on Schelling (pdf)
Granovetter Model (pdf)
Miller and Page Model (pdf)

Section 3: Aggregation

In this section, we explore the mysteries of aggregation, i.e. adding things up. We start by considering how numbers aggregate, focusing on the Central Limit Theorem. We then turn to adding up rules. We consider the Game of Life and one dimensional cellular automata models. Both models show how simple rules can combine to produce interesting phenomena. Last, we consider aggregating preferences. Here we see how individual preferences can be rational, but the aggregates need not be.
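
To make 'simple rules can produce interesting phenomena' concrete, here is a minimal sketch of a one-dimensional cellular automaton in Python; the rule number (30) and grid width are arbitrary illustrative choices, not taken from the course materials.

# A one-dimensional ("elementary") cellular automaton: each cell looks at itself and its two
# neighbours and applies a fixed rule table.  Rule number and width are illustrative choices.
RULE, WIDTH, STEPS = 30, 63, 20
table = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1 for n in range(8)}

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # start with a single "on" cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [table[(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])] for i in range(WIDTH)]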

There exist many great places on the web to read more about the Central Limit Theorem, the Binomial Distribution, Six Sigma, The Game of Life, and so on. I’ve included some links to get you started. The readings for cellular automata and for diverse preferences are short excerpts from my books Complex Adaptive Social Systems and The Difference, respectively.

Central Limit Theorem
Binomial Distribution
Six Sigma
Cellular Automata1 (pdf)
Cellular Automata2 (pdf)
Diverse Preferences

Section 4: Decision Models

In this section, we study some models of how people make decisions. We start by considering multi-criterion decision making. We then turn to spatial models of decision making and then decision trees. We conclude by looking at the value of information.
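
To make the value of information concrete, here is a small hypothetical worked example in Python; the action names, payoffs, and probabilities are all invented for illustration.

# Hypothetical decision: launch a product or not, under uncertain demand.
# All payoffs and probabilities are invented for illustration.
p_high = 0.3                                  # probability that demand turns out high
payoff = {("launch", "high"): 100, ("launch", "low"): -40,
          ("skip", "high"): 0, ("skip", "low"): 0}

def expected(action, p):
    return p * payoff[(action, "high")] + (1 - p) * payoff[(action, "low")]

# Without information: pick the action with the higher expected payoff.
ev_without = max(expected(a, p_high) for a in ("launch", "skip"))

# With perfect information: pick the best action in each state, then average over states.
ev_with = (p_high * max(payoff[(a, "high")] for a in ("launch", "skip"))
           + (1 - p_high) * max(payoff[(a, "low")] for a in ("launch", "skip")))

print("expected value without information:", ev_without)       # 0.3*100 + 0.7*(-40) = 2.0
print("expected value with perfect information:", ev_with)     # 0.3*100 + 0.7*0 = 30.0
print("value of the information:", ev_with - ev_without)       # 28.0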

The reading for multi-criterion decision making will be my guide for the Michigan Civil Rights Initiative. It provides a case study for how to use this technique. For spatial voting and decision models, there exist many great PowerPoint presentations and papers on the web. The Decision Tree writings are from Arizona State University’s Craig Kirkwood.

Multi Criterion Decision Making Case Study (pdf)
Spatial Models (pdf)
Decision Theory

Section 5: Models of People: Thinking Electrons

In this section, we study various ways that social scientists model people. We study and contrast three different models: the rational-actor approach, behavioral models, and rule-based models. These lectures provide context for many of the models that follow. There’s no specific reading for these lectures, though I mention several books on behavioral economics that you may want to consider. Also, if you find the race to the bottom game interesting, just type “Rosemary Nagel Race to the Bottom” into a search engine and you’ll get several good links. You can also find good introductions to “Zero Intelligence Traders” by typing that in as well.
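
For the race to the bottom game, here is a minimal sketch in Python of the iterated reasoning behind it; the starting guess and the number of levels are illustrative choices.

# "Race to the bottom" (guess two-thirds of the average), illustrative numbers: if everyone
# starts at 50 and each round best-responds to the previous round's average, guesses spiral
# toward 0, the rational-actor prediction.  Experimental subjects typically stop after only
# a few levels of this reasoning, which is the behavioral point.
guess = 50.0
for level in range(10):
    print(f"level {level}: guess {guess:.2f}")
    guess = guess * 2 / 3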

Here is a link to a brief primer on behavioral economics that has more references.

Short Primer on Behavioral Economics

Section 6: Linear Models

In this section, we cover linear models. We start by looking at categorical models, in which data gets binned into categories. We use this simple framework to introduce measures like mean, variance, and R-squared. We then turn to linear models, describing what linear models do, how to read regression output (a valuable skill!), and how to fit nonlinear data with linear models. These lectures are meant to give you a “feel” for how linear models are used and perhaps to motivate you to take a course on these topics. I conclude this section by highlighting a distinction between what I call Big Coefficient thinking and New Reality thinking. The readings for this section consist of two short pieces written by me, but you can find abundant resources on the web on linear models, R-squared, regression, and evidence based thinking.
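
For a rough feel of what regression output contains, here is a minimal ordinary-least-squares sketch in Python using only the standard library; the data points are invented for illustration.

# Fit y = a + b*x by ordinary least squares and report R-squared.
# The data points are invented for illustration.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))    # unexplained variation
ss_tot = sum((y - mean_y) ** 2 for y in ys)                     # total variation
r_squared = 1 - ss_res / ss_tot                                 # share of variation explained

print(f"y = {a:.2f} + {b:.2f}*x, R^2 = {r_squared:.3f}")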

Categorical Models (pdf)
Linear Models (pdf)

Section 7: Tipping Points

In this section, we cover tipping points. We focus on two models: a percolation model from physics that we apply to banks, and a model of the spread of diseases. The disease model is more complicated, so I break it into two parts. The first part focuses on diffusion. The second part adds recovery. The readings for this section consist of two excerpts from the book I’m writing on models. One covers diffusion. The other covers tips. There is also a technical paper on tipping points that I’ve included in a link. I wrote it with PJ Lamberson and it will be published in the Quarterly Journal of Political Science. I’ve included this to provide you a glimpse of what technical social science papers look like. You don’t need to read it in full, but I strongly recommend the introduction. It also contains a wonderful reference list.
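
Here is a minimal sketch of the SIS (susceptible-infected-susceptible) logic in Python; the population size, contact rate, spread probability, and recovery rate are illustrative values. The process tips at R0 = 1: below that the disease dies out, above it the disease settles at an endemic share of roughly 1 - 1/R0.

# Minimal SIS (susceptible-infected-susceptible) difference equation.
# W: infected, N: population, c: contact rate, tau: spread probability, a: recovery probability.
# Parameter values are illustrative only.
N, c, tau, a = 10000, 5, 0.2, 0.5
R0 = c * tau / a                     # basic reproduction number; the process tips at R0 = 1
W = 10.0
for t in range(60):
    W = W + c * tau * W * (1 - W / N) - a * W

print("R0:", R0)
print("infected after 60 periods:", round(W))
print("predicted endemic share 1 - 1/R0:", 1 - 1 / R0)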

Tipping Points (pdf)
Diffusion and SIS (pdf)
Lamberson and Page: Tipping Points (pdf)

Section 8: Economic Growth

In this section, we cover several models of growth. We start with a simple model of exponential growth and then move on to models from economics, with a focus on Solow’s basic growth model. I simplify the model by leaving out the labor component. These models help us distinguish between two types of growth: growth that occurs from capital accumulation and growth that occurs from innovation.
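
Here is a minimal sketch of a Solow-style growth model without labour in Python; the square-root production function and all parameter values are illustrative choices.

# Simple growth model without labour: output is a concave function of capital (machines),
# a fixed share s of output is invested, and capital depreciates at rate d.
# The square-root production function and all parameter values are illustrative choices.
A, s, d = 2.0, 0.3, 0.1             # productivity, savings rate, depreciation rate
K = 10.0                            # initial capital
for t in range(200):
    output = A * K ** 0.5
    K = K + s * output - d * K

print("long-run capital:", round(K, 1))
print("analytic steady state (s*A/d)^2:", round((s * A / d) ** 2, 1))   # investment = depreciation

Once investment just offsets depreciation, capital stops growing; sustained growth then has to come from increases in A, that is, from innovation.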

Growth Models (pdf)

Section 9: Diversity and Innovation

In this section, we cover some models of problem solving to show the role that diversity plays in innovation. We see how diverse perspectives (problem representations) and heuristics enable groups of problem solvers to outperform individuals. We also introduce some new concepts like “rugged landscapes” and “local optima”. In the last lecture, we’ll see the awesome power of recombination and how it contributes to growth. The readings for this section consist of an excerpt from my book The Difference courtesy of Princeton University Press.
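
To illustrate rugged landscapes and local optima, here is a minimal hill-climbing sketch in Python; the random landscape and the two step-size heuristics are invented for illustration.

import random

# A rugged one-dimensional landscape on a ring of 100 points (values invented for illustration).
# Hill climbers that can only step by 1 get stuck on local optima; a climber that can also try
# steps of 7 escapes more of them (any point it stops at is also a local optimum for steps of 1).
random.seed(7)
N = 100
landscape = [random.random() for _ in range(N)]

def climb(start, steps):
    x = start
    while True:
        best = x
        for s in steps:
            for cand in ((x + s) % N, (x - s) % N):
                if landscape[cand] > landscape[best]:
                    best = cand
        if best == x:
            return x
        x = best

one = sum(landscape[climb(x, [1])] for x in range(N)) / N
two = sum(landscape[climb(x, [1, 7])] for x in range(N)) / N
print("average value reached, steps {1}:   ", round(one, 3))
print("average value reached, steps {1, 7}:", round(two, 3))
print("global optimum:                     ", round(max(landscape), 3))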

Diversity and Problem Solving (pdf)

Section 10: Markov Processes

In this section, we cover Markov Processes. Markov Processes capture dynamic processes between a fixed set of states. For example, we will consider a process in which countries transition between democracy and dictatorship. To be a Markov Process, it must be possible to get from any one state to any other and the probabilities of moving between states must remain fixed over time. If those assumptions hold, then the process will have a unique equilibrium. In other words, history will not matter. Formally, this result is called the Markov Convergence Theorem. In addition to covering Markov Processes, we will also see how the basic framework can be used in other applications such as determining authorship of a text and the efficacy of a drug protocol.
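
Here is a minimal sketch of the democracy/dictatorship example as a two-state Markov process in Python; the transition probabilities are invented for illustration. Whatever share of democracies you start from, iterating the fixed transition rules converges to the same equilibrium, which is the point of the Markov Convergence Theorem.

# Two-state Markov process: each period a country is a democracy or a dictatorship.
# The transition probabilities are invented for illustration and stay fixed over time.
p_stay_dem = 0.95       # P(democracy stays a democracy)
p_to_dem = 0.10         # P(dictatorship becomes a democracy)

def step(share_dem):
    return share_dem * p_stay_dem + (1 - share_dem) * p_to_dem

for start in (0.0, 0.5, 1.0):           # very different initial shares of democracies
    share = start
    for _ in range(200):
        share = step(share)
    print(f"start {start:.1f} -> long-run share of democracies {share:.3f}")

# Analytic equilibrium: share* = p_to_dem / (p_to_dem + (1 - p_stay_dem)) = 2/3
print("analytic equilibrium:", p_to_dem / (p_to_dem + (1 - p_stay_dem)))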

Markov Processes (pdf)

Section 11: Lyapunov Functions
Models can help us to determine the nature of outcomes produced by a system: will the system produce an equilibrium, a cycle, randomness, or complexity? In this set of lectures, we cover Lyapunov Functions. These are a technique that will enable us to identify many systems that go to equilibrium. In addition, they enable us to put bounds on how quickly the equilibrium will be attained. In this set of lectures, we learn the formal definition of Lyapunov Functions and see how to apply them in a variety of settings. We also see where they don’t apply and even study a problem where no one knows whether or not the system goes to equilibrium.
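
Here is a minimal Lyapunov-style sketch in Python, an invented example rather than one from the lectures: the sum of squared route loads is bounded below and falls by at least a fixed amount whenever anyone switches, so the process must reach equilibrium in finitely many steps.

import random

# An invented Lyapunov example (not from the lectures): commuters choose between two routes
# and switch only if the other route would be strictly less crowded after the move.
# V = (load on route 0)^2 + (load on route 1)^2 is bounded below and falls by at least 2
# with every switch, so the switching has to stop after finitely many steps.
random.seed(3)
routes = [random.randrange(2) for _ in range(50)]     # route choice of each of 50 commuters

def V():
    n0 = routes.count(0)
    n1 = len(routes) - n0
    return n0 * n0 + n1 * n1

changed = True
while changed:
    changed = False
    for i in range(len(routes)):
        other = 1 - routes[i]
        if routes.count(other) + 1 < routes.count(routes[i]):   # strictly better after moving
            before = V()
            routes[i] = other
            print(f"commuter {i} switches, V: {before} -> {V()}")
            changed = True

print("equilibrium loads:", routes.count(0), routes.count(1))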

Lyapunov Functions (pdf)

Section 12: Coordination and Culture
In this set of lectures, we consider some models of culture. We begin with some background on what culture is and why it’s so important to social scientists. In the analytic section, we begin with a very simple game called the pure coordination game. In this game, the players win only if they choose the same action. Which action they choose doesn’t matter — so long as they choose the same one. For example, whether you drive on the left or the right side of the road is not important, but what is important is that you drive on the same side as everyone else. We then consider situations in which people play multiple coordination games and study the emergence of culture. In our final model, we include a desire for consistency as well as coordination in a model that produces the sorts of cultural signatures seen in real world data. The readings for this section include some of my notes on coordination games and then the Bednar et al. academic paper. In that paper, you see how we used Markov Processes to study the model. There is also a link to the Axelrod NetLogo Model.
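
Here is a minimal sketch of the pure coordination game in Python with illustrative payoffs: matching yields 1 for both players, mismatching yields 0, and every matched profile is an equilibrium.

# Pure coordination game with illustrative payoffs: both players get 1 if they choose the
# same action and 0 otherwise; which common action they pick does not matter.
actions = ("left", "right")
payoff = {(a, b): (1, 1) if a == b else (0, 0) for a in actions for b in actions}

# Both (left, left) and (right, right) are equilibria: deviating alone never helps.
for a in actions:
    deviation = max(payoff[(other, a)][0] for other in actions if other != a)
    print(f"({a}, {a}): payoff {payoff[(a, a)][0]}, payoff after deviating {deviation}")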

Coordination Games (pdf)
Axelrod Culture Model in NetLogo

Section 13: Path Dependence

In this set of lectures, we cover path dependence. We do so using some very simple urn models, the most famous of which is the Polya Process. These models are very simple but they enable us to unpack the logic of what makes a process path dependent. We also relate path dependence to increasing returns and to tipping points. The reading for this lecture is a paper that I wrote that is published in the Quarterly Journal of Political Science.
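
Here is a minimal sketch of the Polya Process in Python; the number of periods and runs are arbitrary choices.

import random

# Polya process: the urn starts with one red and one blue ball.  Each period, draw a ball at
# random and put it back together with one more ball of the same colour.  Early draws lock in
# the long-run mix; in the limit every red share between 0 and 1 is in fact equally likely.
random.seed(42)

def polya_run(periods=2000):
    red, blue = 1, 1
    for _ in range(periods):
        if random.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

print("long-run red shares across 10 runs:", sorted(round(polya_run(), 2) for _ in range(10)))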

Path Dependence (pdf)

Section 14: Networks

In this section, we cover networks. We discuss how networks form, their structure — in particular some common measures of networks — and their function. Often, networks exhibit functions that emerge; by that we mean that no one intended the functionality, but it arises owing to the structure of the network. The reading for this section is a short article by Steven Strogatz.
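
As a small illustration of common network measures, here is a sketch in Python that computes degree and clustering for a toy graph; the graph itself is invented for illustration.

# Two common network measures on a toy graph (the edges are invented for illustration):
# degree, and the clustering coefficient (share of a node's neighbour pairs that are linked).
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")}
nodes = sorted({n for e in edges for n in e})
neigh = {n: {m for e in edges for m in e if n in e and m != n} for n in nodes}

def clustering(n):
    k = len(neigh[n])
    if k < 2:
        return 0.0
    pairs = [(u, v) for u in neigh[n] for v in neigh[n] if u < v]
    linked = sum((u, v) in edges or (v, u) in edges for u, v in pairs)
    return linked / len(pairs)

for n in nodes:
    print(n, "degree:", len(neigh[n]), "clustering:", round(clustering(n), 2))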

Strogatz: Exploring complex networks

Section 15: Randomness and Random Walks

In this section, we first discuss randomness and its various sources. We then discuss how performance can depend on skill and luck, where luck is modeled as randomness. We then learn a basic random walk model, which we apply to the Efficient Market Hypothesis, the idea that market prices contain all relevant information so that what’s left is randomness. We conclude by discussing a finite-memory random walk model that can be used to model competition. The reading for this section is a paper on distinguishing skill from luck by Michael Mauboussin.
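
Here is a minimal random-walk sketch in Python in the spirit of the Efficient Market Hypothesis discussion; the number of periods and the noise distribution are illustrative choices.

import random

# Random walk: price tomorrow = price today + noise.  Under the Efficient Market Hypothesis,
# past changes carry no usable information about future changes.  All numbers are illustrative.
random.seed(5)
changes = [random.gauss(0, 1) for _ in range(10000)]
prices = [100.0]
for c in changes:
    prices.append(prices[-1] + c)

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (sum((x - ma) ** 2 for x in a) ** 0.5 * sum((y - mb) ** 2 for y in b) ** 0.5)

# For a pure random walk the correlation between today's change and tomorrow's is close to zero.
print("lag-1 correlation of changes:", round(correlation(changes[:-1], changes[1:]), 3))
print("price after 10000 steps:", round(prices[-1], 1))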

Mauboussin: Skill vs Luck (pdf)

Section 16: The Colonel Blotto Game

In this section, we cover the Colonel Blotto Game. This game was originally developed to study war on multiple fronts. It’s now applied to everything from sports to law to terrorism. We will discuss the basics of Colonel Blotto, move on to some more advanced analysis and then contrast Blotto with our skill-luck model from the previous section. The readings for this section are an excerpt from my book The Difference and a paper that I wrote with Russell Golman of Carnegie Mellon. You need only read the first four pages of the Golman paper.
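
Here is a minimal Colonel Blotto sketch in Python with invented troop numbers; it shows the game's signature feature that allocations can beat each other in a cycle, so there is no single best strategy.

# Colonel Blotto with invented numbers: each player spreads 6 troops across 3 fronts; a front
# goes to whoever puts more troops on it, and whoever wins more fronts wins the game.
def winner(a, b):
    fronts_a = sum(x > y for x, y in zip(a, b))
    fronts_b = sum(y > x for x, y in zip(a, b))
    return "first" if fronts_a > fronts_b else "second" if fronts_b > fronts_a else "tie"

A, B, C = (4, 1, 1), (2, 2, 2), (3, 3, 0)
print(winner(B, A))   # 'first': B beats A
print(winner(C, B))   # 'first': C beats B
print(winner(A, C))   # 'first': A beats C, so no allocation beats all the others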

Blotto from The Difference (pdf)
Golman Page: General Blotto (pdf)

Section 17: The Prisoners’ Dilemma and Collective Action

In this section, we cover the Prisoners’ Dilemma, Collective Action Problems and Common Pool Resource Problems. We begin by discussing the Prisoners’ Dilemma and showing how individual incentives can produce undesirable social outcomes. We then cover seven ways to produce cooperation. Five of these will be covered in the paper by Nowak and Sigmund listed below. We conclude by talking about collective action and common pool resource problems and how they require deep careful thinking to solve. There’s a wonderful piece to read on this by the Nobel Prize winner Elinor Ostrom.
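
Here is a minimal payoff-matrix sketch of the Prisoners' Dilemma in Python; the payoff numbers are a common textbook choice rather than anything from the course readings.

# Prisoners' Dilemma: defecting is better for each player no matter what the other does,
# yet mutual defection is worse for both than mutual cooperation.  The payoff numbers
# (row player, column player) are a common textbook choice, not from the course readings.
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

for other in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: payoff[(mine, other)][0])
    print(f"if the other player plays {other}, my best reply is {best}")

print("mutual defection:  ", payoff[("defect", "defect")])
print("mutual cooperation:", payoff[("cooperate", "cooperate")])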

The Prisoners’ Dilemma in the Stanford Encyclopedia of Philosophy (pdf)
Nowak and Sigmund: Five Ways to Cooperate (pdf)
Ostrom: Going Beyond Panaceas

Section 18: Mechanism Design: Auctions

In this section, we cover mechanism design. We begin with some of the basics: how to overcome problems of hidden action and hidden information. We then turn to the more applied question of how to design auctions. We conclude by discussing how one can use mechanisms to make decisions about public projects. The readings for this section consist of a piece by Eric Maskin, who won a Nobel Prize for his work on mechanism design, and some slides on auctions by V.S. Subrahmanian. The Maskin article can be tough sledding near the end. Don’t worry about necessarily understanding everything. Focus on the big picture that he describes.
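
As one concrete auction example, here is a minimal second-price (Vickrey) auction sketch in Python with invented values; it illustrates the standard result that bidding your true value does at least as well as shading down or bidding up.

# Second-price (Vickrey) auction with invented numbers: the highest bidder wins but pays the
# second-highest bid.  Summing payoffs against every possible rival bid shows why bidding
# your true value does at least as well as shading down or bidding up (the standard result).
value = 10

def payoff(my_bid, rival_bid):
    return value - rival_bid if my_bid > rival_bid else 0    # ignore exact ties for simplicity

for my_bid in (6, 10, 14):
    total = sum(payoff(my_bid, rival) for rival in range(21))
    print(f"bid {my_bid:>2}: total payoff against rival bids 0..20 = {total}")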

Maskin: Mechanism Design (pdf)
V.S. Subrahmanian’s auction slides (pdf)

Section 19: Learning: Replicator Dynamics

In this section, we cover replicator dynamics and Fisher’s fundamental theorem. Replicator dynamics have been used to explain learning as well as evolution. Fisher’s theorem demonstrates how the rate of adaptation increases with the amount of variation. We conclude by describing how to make sense of both Fisher’s theorem and our results on Six Sigma and variation reduction. The readings for this section are very short. The second reading on Fisher’s theorem is rather technical. Both are excerpts from Diversity and Complexity.
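
Here is a minimal sketch of discrete replicator dynamics in Python; the three fitness values are invented for illustration.

# Discrete replicator dynamics with fixed, invented fitness values: each type's population
# share is scaled by its fitness relative to the average.  Fisher-style check: the per-period
# gain in average fitness equals variance / average, so adaptation is fastest while the
# population is still diverse.
fitness = [1.0, 1.5, 2.0]
shares = [1 / 3, 1 / 3, 1 / 3]

for t in range(10):
    avg = sum(s * f for s, f in zip(shares, fitness))
    var = sum(s * (f - avg) ** 2 for s, f in zip(shares, fitness))
    shares = [s * f / avg for s, f in zip(shares, fitness)]
    new_avg = sum(s * f for s, f in zip(shares, fitness))
    print(f"t={t}: avg fitness {avg:.3f}, gain {new_avg - avg:.4f}, variance/avg {var / avg:.4f}")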

The Replicator Equation (pdf)
Fisher’s Theorem (pdf)

Section 20: The Many Model Thinker: Diversity and Prediction

In our final section, we cover the value of ability and diversity to create wise crowds when making predictions. We start off by talking about category models and linear models and how they can be used to make predictions. We then cover the Diversity Prediction Theorem, which provides basic intuition for how collective prediction works. We conclude by talking about the value of having lots of models. The reading for this section is a short explanation of the diversity prediction theorem.
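
Here is a minimal numeric check of the Diversity Prediction Theorem in Python; the predictions and the true value are invented for illustration.

# Diversity Prediction Theorem:
#   (crowd error)^2 = average (individual error)^2 - average (individual - crowd)^2
# The predictions and the true value are invented for illustration.
truth = 100
predictions = [80, 90, 105, 120, 95]

crowd = sum(predictions) / len(predictions)
crowd_error = (crowd - truth) ** 2
avg_individual_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
diversity = sum((p - crowd) ** 2 for p in predictions) / len(predictions)

print("squared crowd error:       ", crowd_error)
print("avg squared individual err:", avg_individual_error)
print("prediction diversity:      ", diversity)
print("error minus diversity:     ", avg_individual_error - diversity)   # equals the crowd error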

Diversity Prediction Theorem


Lessons and Assumptions

  • Someone with a model beats someone without a model.
  • Someone with a better model beats someone with a worse model.
  • Someone with several models beats someone with only one model.
  • Someone who has a model and also interprets it with human judgment beats someone who only has a model.
  • Best of all are several very good models that are also interpreted.

Advantages of Models

  • they help clarify the problem
  • you can only partially infer micro-preferences from the macro perspective. even very small preferences lead, in aggregate, to massive segregation.
  • they make things describable more precisely.
    in part they also make assumptions falsifiable: tipping points, for example, are something different from those points on growth curves where growth takes off or the brakes kick in. tipping points are much more specific.
  • they are not meant to replace thinking and common sense.
    e.g. you always have to consider whether you are acting according to the big-coefficient pattern or whether you are already in a 'new reality' that makes the whole problem obsolete.
  • in some cases genuinely complex phenomena become concretely describable.
    for instance models that also describe the transition between regimes (the web going from an organic state to a speculative one, etc.)
  • sometimes you have to do 'game theory' yourself – in the sense of wanting to win, etc. – to keep others from doing it.
    that is: in these cases people start playing a game for which they have criteria by which they want to win it. if you don't play, they will, etc.
  • they sometimes explain things unexpectedly.
  • very important: the distinction between big coefficient (or often small, stupid, or irrelevant coefficient) and new reality.
    you can optimize all you want, you are still doing the wrong thing if the world has already moved on.
    the obvious example is the culture war around copyright.

Threshold

The concept of a threshold, for example, is genuinely useful. Many things can be explained / modeled much more easily and better if you take something like a threshold into account.

For example: what is the threshold of friends who have to be on a service before I sign up as well?

There are always people with a threshold of 0, i.e. people who are the first to sign up.

And many simply have a threshold of 5, or a threshold defined by certain people they pay attention to, etc.

In any case, adoption of applications, and adoption in general, clearly works threshold-based: where many others are, that's where I'll be too.

But you can also see it with concepts: once the barriers have fallen for things like Kickstarter, they suddenly break elsewhere too.
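
Here is a minimal threshold-adoption sketch in Python in the spirit of Granovetter's threshold model; the threshold distribution is invented for illustration. With one person at every threshold from 0 to 9 the cascade runs all the way through, and removing the single threshold-0 pioneer kills it entirely.

# Threshold adoption in the spirit of Granovetter's model: each person joins once at least
# `threshold` others have already joined.  The threshold lists are invented for illustration.
def adopters(thresholds):
    joined = 0
    while True:
        now = sum(1 for t in thresholds if t <= joined)
        if now == joined:
            return joined
        joined = now

population = list(range(10))                 # one person at each threshold 0..9
print("with the threshold-0 pioneer:   ", adopters(population), "of", len(population))
print("without the threshold-0 pioneer:", adopters(population[1:]), "of", len(population) - 1)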