February of that year we also had the headline: “OpenAI built a text generator so good, it’s considered too dangerous to release.” That was GPT-2. Not 3, not 3.5… 2. In 2020, Google made an AI-powered Pinterest clone, then in December fired Timnit Gebru, one of the leading voices in AI ethics, over a paper pointing out limits and dangers of the technology. To be fair, 2020 wasn’t a great year for a lot of people — with the notable exception of OpenAI, whose co-founder Sam Altman had to personally tamp down hype for GPT-3 because it had grown beyond tenable levels. 2021 saw the debut of Google’s own large language model, LaMDA, though the demos didn’t really sell it. Presumably they were still casting about for a reason for it to exist beyond making Assistant throw fewer errors.