Chapters (auto-generated)

0:00  Introduction
2:46  Presentation
4:36  Guiding Questions
6:16  Brief History
8:13  Graph
9:22  Risks
11:54  Mitigation
13:18  Who is getting the benefits?
14:14  Unmanageable data
17:53  Value lock
20:24  Bias
22:09  How big is too big?
23:04  Research trajectories
24:12  What is a stochastic parrot?
28:02  Risk management strategies
29:39  Challenges
31:19  Questions
35:25  Questions and comments
38:17  Can language models be too big?
42:15  Building specificity
48:57  Incentives
51:23  Companies vs academia
53:57  Copilot
55:25  Comments
57:56  Are stochastic parrots agents?
On the dangers of stochastic parrots: Can language models be too big? 🦜
308 likes · 18,306 views · 13 Jul 2021
Keynote: Professor Emily M. Bender
Panellists: Dr Anjali Mazumder, Dr Zachary Kenton and Professor Ann Copestake
Host: Dr Adrian Weller
Website: https://www.turing.ac.uk/events/dange...

About the event: Professor Emily M. Bender will present her recent co-authored paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". In this paper, Bender and her co-authors take stock of the recent trend towards ever larger language models (especially for English), which the field of natural language processing has been using to extend the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks. They take a step back and ask: How big is too big? What are the possible risks associated with this technology, and what paths are available for mitigating those risks? The presentation will be followed by a panel discussion.


The Alan Turing Institute
