Visualization of the argument for free trade/open borders

Hat tip: Landon Schnabel, via Twitter.

 

Posted in Economics, Law | 6 Comments

Facebook 101

This fall, we are teaching a large undergraduate survey course (n > 800) on “the legal and ethical environment of business.” Instead of trying to cover everything, we will focus on the founding and subsequent explosive growth of Facebook, as depicted in Ben Mezrich’s bestseller “The Accidental Billionaires” (pictured below) and in the movie “The Social Network,” to explore various areas of the legal and ethical environments of business. These include the law of contracts (think of Facebook’s “terms of use”), intellectual property (think of Facebook’s logos, brand, and “Like” symbol), choice of business entity (think of Facebook’s evolution from a two-man partnership into a Florida limited liability company before incorporating in the State of Delaware), and many other relevant legal and ethical topics, such as the ethics of Facebook’s privacy policies.

Although the Professor is not a big fan of Facebook, we think this focus on the founding of Facebook makes good sense for several reasons. First, our target audience consists of undergraduates, most of whom use some form of social media to connect with the wider world. Moreover, it was a motley crew of college students who ended up creating one of the most successful Internet platforms in the world today. Mark Zuckerberg changed the world, so why not learn from his successes … and from some of his mistakes?

Posted in Ethics, Law, Uncategorized | Leave a comment

The Simple Sabotage Field Manual

Via kottke, we found this 20-page, government-issued, World War II-era guidebook called the Simple Sabotage Field Manual. University administrators and business managers, take note: here is tip #3:

Organizations and Conferences: When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committees as large and bureaucratic as possible. Hold conferences when there is more critical work to be done.

Simple Sabotage Field Manual

Use at your own risk

Posted in Uncategorized | Leave a comment

The spatial physics of cancer cells

Posted in Nature, Science, Traffic | Leave a comment

Ethical machines (part 3 of 3)

In our previous posts, we presented Brett Frischmann’s novel idea of a Reverse Turing Test, i.e. the idea of testing the ability of humans to think like a machine or a computer. But how would we create such a test? For his part, Frischmann proposes four criteria (pictured below, via John Danaher) for creating a Reverse Turing Test. Here, we consider Frischmann’s fourth factor: rationality. (His first two criteria–mathematical computation and random number generation–do not appear to carry any moral significance, while his third criterion–common sense or folk wisdom–seems better suited for Alan Turing’s original test than for a reverse one.)

By rationality, Frischmann means instrumental or means-ends rationality. Consider the rational actor/utility-maximization model in economics (homo economicus) or the assumption of hyper-rationality in traditional (i.e. non-evolutionary) game theory: “I know that you know that I know …” Many human decisions, however, are emotive or irrational in nature, such as falling in love, overeating, or suicide. Given this disparity between machine-like rationality and human-like emotions, we should in principle be able to create a Reverse Turing Test that measures how rational or machine-like a person is. The more instrumental and less emotional a person is, the closer he or she would come to passing Frischmann’s hypothetical Reverse Turing Test.
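To make this concrete, here is a minimal sketch (our own illustration, not anything Frischmann actually proposes) of how the rationality criterion might be scored: present a subject with repeated choices between simple lotteries and count how often he or she picks the option with the higher expected value. A resolute utility maximizer would score a perfect 1.0; a flesh-and-blood human, swayed by framing effects and loss aversion, presumably would not.

```python
import random

def expected_value(lottery):
    """Expected value of a lottery given as a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

def rationality_score(choose, trials=1000):
    """Hypothetical scoring rule: the fraction of trials in which the subject's
    choice function picks the lottery with the higher expected value."""
    hits = 0
    for _ in range(trials):
        # A risky gamble versus a sure thing, with randomly drawn payoffs.
        risky = [(0.7, random.randint(10, 100)), (0.3, 0)]
        sure = [(1.0, random.randint(10, 60))]
        best = 'risky' if expected_value(risky) >= expected_value(sure) else 'sure'
        hits += (choose(risky, sure) == best)
    return hits / trials

# The "machine-like" benchmark: a resolute expected-utility maximizer.
machine = lambda r, s: 'risky' if expected_value(r) >= expected_value(s) else 'sure'

print(rationality_score(machine))  # 1.0
```

The closer a subject’s score is to 1.0, the more machine-like (in Frischmann’s sense) his or her decision-making would appear.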

Does the rationality component of the Reverse Turing Test have any ethical implications? John Danaher thinks so: “This Reverse Turing Test has some ethical and political significance. The biases and heuristics that define human reasoning are often essential to what we deem morally and socially acceptable conduct. Resolute utility maximisers [have] few friends in the world of ethical theory. Thus, to say that a human is too machine-like in their rationality might be to pass ethical judgment on their character and behavior.” (See his 21 July blog post.) We, however, are not so sure what the ethical implications of Frischmann’s rationality criterion are. John Rawls’s famous “original position” thought experiment, for example, is premised on the rational actor model, and theories of consequentialism (such as rule-utilitarianism) form a major tributary in the infinite river of moral philosophy. Indeed, to the extent that machines are far less emotional and more instrumentally rational than humans, might machines have a greater ethical capacity than we do?

Credit: John Danaher

Posted in Ethics, Philosophy, Science | Leave a comment

Thinking like a machine (part 2 of 3)

In our previous post, we mentioned John Danaher’s excellent review of Brett Frischmann’s 2014 paper exploring the possibility of a Reverse Turing Test. One of the insightful contributions Frischmann makes to this voluminous literature is his idea of a Turing Line: the fuzzy line that separates humans from machines. According to Frischmann, this line serves two essential functions: (1) it differentiates humans from machines (and machines from humans, we would add), and (2) it demarcates a “finish line” or goal. In other words, for a machine to pass Turing’s original test, it must cross this imaginary line by deceiving us into believing that it is human. Most of the literature in this area focuses on the human side of the line: will a machine ever be capable of crossing this boundary? Frischmann, however, focuses on the machine side of the line. (In the words of Danaher: “Instead of thinking about the properties or attributes that are distinctively human, [Frischmann is] thinking about the properties and attributes that are distinctly machine-like.”) In particular, Frischmann poses a different and far more original question: will a human ever be able to deceive another person (or another machine) into believing that he or she is a machine? But what does it mean to “think like a machine”? We shall take up that difficult question in our next post …

Credit: Brett Frischmann (via John Danaher)

Posted in Bayesian Reasoning, Philosophy, Science | 4 Comments

Reverse Turing Tests and Ethical Machines (part 1 of 3)

Our colleague John Danaher recently pondered the possibility of a “Reverse Turing Test” in this intriguing blog post dated 21 July 2016. That is, instead of testing for a machine’s ability to think like a human, what if we tested for a human’s ability to think like a machine? (This theoretical paper by law professor Brett Frischmann on “Human-Focused Turing Tests” is what led Danaher to pose this novel question.) Moreover, according to Danaher (and to Frischmann), the ability to think like a machine may have some serious ethical implications. For our part, we have long been fascinated by the Turing Test, and we have often wondered whether ethical rules like Kant’s famous “Categorical Imperative” or the Golden Rule could be reduced to a simple computer program. (By way of example, we used the original version of Alan Turing’s test to develop the notion of “probabilistic verdicts” in this paper.) Accordingly, we will be blogging about the ideas in Danaher’s post and in Frischmann’s paper over the next few days.
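For a flavor of what such a reduction might look like (a toy sketch of our own, not anything Danaher or Frischmann proposes, with every name in it hypothetical), consider a naive program that applies the Golden Rule by swapping the roles of actor and recipient and asking whether the actor would consent to being treated that way:

```python
from dataclasses import dataclass

@dataclass
class Act:
    description: str   # e.g. "be deceived"
    actor: str
    recipient: str

def would_consent(person: str, act: Act) -> bool:
    """Stub: would `person` consent to being on the receiving end of `act`?
    In any real system, this is where all the hard ethical work would hide."""
    vetoes = {"alice": {"be deceived"}, "bob": set()}
    return act.description not in vetoes.get(person, set())

def golden_rule_permits(act: Act) -> bool:
    """Naive Golden Rule: the act is permitted only if the actor would accept
    the same act with the roles reversed."""
    reversed_act = Act(act.description, actor=act.recipient, recipient=act.actor)
    return would_consent(act.actor, reversed_act)

print(golden_rule_permits(Act("be deceived", actor="alice", recipient="bob")))  # False
```

Of course, all the difficulty is buried inside would_consent, which is precisely why the question of whether ethical rules can really be “reduced” to code remains an open one.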

 

xkcd.com

Posted in Bayesian Reasoning, Philosophy, Science, Science Fiction | 1 Comment