Preview of my second ChatGPT-themed survey

As promised, I am sharing the second half of my new ChatGPT-themed survey for my upcoming business law course this fall. (The first half is posted here.) The second half of this graded survey contains four additional questions, all of which are posted below the fold:

SURVEY QUESTION #1

Broadly speaking, AI systems like the popular ChatGPT are considered “aligned” when such systems produce outputs that are consistent with the intended goals and ethical values of their creators. In your opinion, what is the best way of promoting AI alignment: direct government regulation, taxes, or market forces/the invisible hand?

(A) DIRECT GOVERNMENT REGULATION

(B) TAXES

(C) MARKET FORCES/THE INVISIBLE HAND

SURVEY QUESTION #2

If the AI industry is going to be regulated or taxed, which level of government regulation/taxation do you think would be best: local, national, or international?

(A) STATE/LOCAL (i.e. each U.S. state should be allowed to decide for itself whether it wants to regulate AI)

(B) FEDERAL/NATIONAL (i.e. AI should be regulated at the national level by the United States or at the transnational level by the European Union)

(C) INTERNATIONAL (i.e. AI should be regulated by an international regulatory body, perhaps one modeled after the World Trade Organization or the United Nations’ International Atomic Energy Agency)

SURVEY QUESTION #3

At the domestic (U.S.) level, which method of regulation would best promote the goal of AI safety without stifling progress and innovation: the courts, an existing government agency, or a completely new regulatory body?

(A) THE COURTS (i.e. regulation via the common law and existing copyright law, e.g. civil lawsuits and class action lawsuits decided on a case-by-case basis for any harms generated by AI systems)

(B) EXISTING GOVERNMENT AGENCY (i.e. direct regulation by an existing government agency like the Federal Communications Commission in Washington, D.C.)

(C) NEW REGULATORY BODY (i.e. direct regulation by a completely new domestic regulatory body)

SURVEY QUESTION #4

Many researchers and tech leaders, including computer scientists Yoshua Bengio and Stuart J. Russell as well as Tesla and SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak, have warned that AI systems could pose an existential risk to humanity, and in March of this year they signed an open letter (see here) urging AI developers to pause their research efforts for six months: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” In your opinion, should AI researchers pause their work for at least six months?

(A) YES

(B) NO

Image credit: Alisa Stern, via Shutterstock

About F. E. Guerra-Pujol

When I’m not blogging, I am a business law professor at the University of Central Florida.