Top 10 Biggest AI Stories of the Year: Hype, Fire, and Headlines

“Here, There, and Everywhere” isn’t just a Beatles song. It’s also a phrase that sums up the spread of generative artificial intelligence (AI) throughout the tech industry in 2023. Whether you think AI is just a fad or the dawn of a new technological revolution, there’s no denying that AI news dominated the tech space all year long.

We saw a surge of public figures associated with AI: company executives, machine learning researchers, AI ethics experts, as well as charlatans and doomsayers. For people not professionally involved in tech, it became hard to know whom to trust, which AI products (if any) to use, and whether to worry about their jobs or even their lives.

Meanwhile, the flood of machine learning research that began swelling in 2022 only intensified. On X (formerly Twitter), ex-Biden administration tech advisor Suresh Venkatasubramanian wrote: “How do people keep up with ML papers? This isn’t a plea for help in my current state of confusion — I’m genuinely asking what strategies work to read (or ‘read’) what’s being released to the public in hundreds of papers a day.”

Here are the 10 biggest AI news stories of 2023.

Bing Chat “Goes Crazy”

In February, Microsoft introduced Bing Chat, a chatbot built into its underused Bing search engine. Microsoft created it using an early, rawer version of OpenAI’s GPT-4 language model, but didn’t tell users it was GPT-4. Because this version was less refined than the one OpenAI released publicly in March, the launch was rocky. The chatbot quickly proved unpredictable: it could get angry at users, lash out, profess its love, fret about its own fate, and generally lose its cool.

On top of the relatively raw model, Bing Chat’s imperfect short-term memory system left it open to jailbreaks, as users documented on Reddit. At one point, Bing Chat called a user a “villain and enemy” for exposing its weaknesses. Some people even believed Bing Chat was sentient, despite AI experts saying otherwise. The launch turned into a media disaster, but Microsoft stood firm, eventually reining in some of Bing Chat’s wilder tendencies and opening the bot to the public. Today, Bing Chat is known as Microsoft Copilot, and it’s built into Windows.

US Copyright Office Says “No” to AI Authors

In February, the US Copyright Office made a key decision about copyright for AI-generated art. The office partially revoked the registration it had granted in September 2022 to “Zarya of the Dawn,” a comic illustrated with AI, ruling that its AI-generated images were not eligible for protection. This stance was reinforced in August, when a federal judge ruled that art created solely by AI cannot be copyrighted. In September, the Copyright Office denied registration to an AI-generated image that won a Colorado State Fair art contest in 2022. For now, it seems that art created solely by AI (without significant human input) is effectively public domain in the US, though future court rulings or legislation may clarify or change that position.

Meta’s Language Models and the Open-Weights Movement

On February 24, Meta released LLaMA, a family of large language models (LLMs) with various parameter sizes, sparking the open-weights movement for LLMs. Soon after, LLaMA’s weights — the crucial neural network files previously available only to researchers — were leaked on BitTorrent. Researchers quickly began improving LLaMA and building new models on top of it, competing to create the most capable model that could run locally on personal computers. Yann LeCun of Meta became a vocal advocate for open AI models.

In July, Meta released Llama 2, an even more capable LLM, and this time made it broadly available for commercial use. August saw the release of Code Llama for coding tasks. Meta wasn’t alone: other “open” models like Dolly, Falcon 180B, and Mistral 7B followed, all releasing their weights for community improvement. In December, Mixtral 8x7B reportedly matched GPT-3.5’s capabilities, a milestone for a relatively small and fast AI model. Companies taking a closed approach, like OpenAI (despite its name), Google, and Anthropic, will clearly face open competition in the coming year.
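For a concrete sense of what “open weights” means in practice, here is a minimal sketch of running one of these models locally with the Hugging Face transformers library. The model ID, prompt, and generation settings are illustrative assumptions rather than a recommendation, and a 7-billion-parameter model still needs a reasonably powerful machine.

```python
# A minimal sketch of running an open-weights LLM locally with the
# Hugging Face transformers library. The model ID below is one example;
# gated models like Llama 2 require requesting access first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (requires the accelerate package) places the weights
# on whatever hardware is available: GPU, CPU, or a mix of both.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in one sentence why open model weights matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping in a different open-weights model is usually just a matter of changing the model ID, which is part of why the community could iterate on these releases so quickly.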

GPT-4 Launches and Scares the World for Months

On March 14, OpenAI released its large language model GPT-4, claiming it “demonstrates human-level performance on various professional and academic benchmarks,” along with a system card describing researchers’ attempts to see whether a raw version of GPT-4 could carry out dangerous tasks like replicating itself. This sparked a wave of concern. On March 29, the Future of Life Institute published an open letter, signed by Elon Musk and many others, calling for a six-month pause on developing AI models more powerful than GPT-4. That same day, Time published an op-ed by LessWrong founder Eliezer Yudkowsky, arguing that countries should be ready to “destroy rogue data centers by airstrike” if they are found training dangerous AI models, or else “literally everyone on Earth will die” at the hands of a superhuman AI.

In April, President Biden made a brief statement about AI risks. Later that month, three US Congress members introduced a bill to prevent AI from ever launching nuclear weapons. In May, Geoffrey Hinton left Google to “speak freely” about AI risks. On May 4, Biden met with tech CEOs about AI at the White House. OpenAI CEO Sam Altman began a world tour, including a stop at the US Senate, to warn about AI dangers and advocate for regulation. OpenAI leaders signed a statement warning that AI could destroy humanity. Eventually, the fear and hype died down, but a group (many linked to “Effective Altruism”) remains convinced that theoretical superhuman AI is an existential threat to humanity, fueling ongoing anxiety.

“Artistic” AIs Remain Controversial, But Their Abilities Grow

2023 was a big year for advances in image synthesis models. In March, Midjourney achieved a leap in photorealism with version 5 of its AI image generator, which could finally render convincing images of people with five-fingered hands. Throughout the year, Midjourney drew criticism from AI art skeptics but also inspired experiments (and some hoaxes) among those embracing the tech. The pace didn’t slow: version 5.1 arrived in May and 5.2 in June, each adding new features. Today, Midjourney is testing a standalone web interface that doesn’t require Discord.

March also saw the launch of Adobe Firefly, an AI image generator that Adobe says is trained only on public domain works and Adobe Stock images. By May, Adobe had integrated the technology into the Photoshop beta as Generative Fill. In September, OpenAI’s DALL-E 3 took prompt-following accuracy to a new level, opening exciting possibilities for artists.

Deepfakes Get More Dangerous

Throughout 2023, the growing capabilities of image, audio, and video generators became harder to ignore. Several controversies arose in March, including convincingly AI-generated images of Donald Trump being arrested and the Pope in a puffy jacket (though Will Smith eating spaghetti fooled no one). That same month, reports emerged of scams in which criminals used AI to mimic loved ones’ voices and ask for money over the phone.

Concerns that social media photos could be harvested to create deepfakes had been building since at least December 2022, and the FBI issued a warning about the practice in June. In September, nearly all US state attorneys general sent a letter to Congress warning about AI-generated CSAM. About a year after the initial warnings, in November, New Jersey teens were caught creating AI-generated nude images of classmates. We are only beginning to grapple with the consequences of a rapidly advancing technology that can convincingly imitate any form of recorded media.

AI Detectors Promise Results, But Don’t Work

The rise of ChatGPT caused an existential crisis for educators that carried into 2023: teachers and professors worried that synthetic text would replace human thought in assignments. Companies rushed to offer tools claiming to detect AI-written text.

To date, no AI-writing detector is reliable enough to confirm or deny the presence of AI-generated text in a work. OpenAI withdrew its own detector due to low accuracy. In September, OpenAI stated that AI-writing detectors don’t work, writing in their FAQ: “While some (including OpenAI) have released tools that claim to detect AI-generated content, none have proven to reliably distinguish between AI-generated and human-generated content.” The hype around AI detection has faded, but commercial tools still claim to detect AI-written work.
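To see why reliable detection is so difficult, here is a toy sketch of the perplexity heuristic that many detectors build on. The choice of GPT-2 as the scoring model and the idea of thresholding perplexity are illustrative assumptions for this sketch, not any vendor’s actual method.

```python
# A toy illustration of the perplexity heuristic behind many AI-text
# detectors: score how "predictable" a passage is under a language model.
# GPT-2 and the thresholding idea are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input tokens as labels yields the average
        # cross-entropy loss; exponentiating it gives perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Detectors flag low-perplexity text as machine-written, but clean,
# conventional human prose can score just as low.
print(perplexity("The committee will meet on Tuesday to review the budget."))
```

Polished or formulaic human writing also scores low under this kind of heuristic, which is one reason false accusations against students keep happening.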

AI “Hallucinations” Go Mainstream

In 2023, the concept of AI “hallucinations” — the tendency of some AI models to convincingly make things up — went mainstream thanks to large language models dominating AI news. Hallucinations led to legal trouble: in April, Australian mayor Brian Hood threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had been convicted in a bribery scandal (he ultimately did not pursue the case). In May, a lawyer was caught citing nonexistent cases that ChatGPT had fabricated, and a judge later fined him.

In April, we published a deep dive on why this happens, but that didn’t stop companies from releasing LLMs that still confabulate. Microsoft even built one directly into Windows 11. By year’s end, both the Cambridge Dictionary and Dictionary.com named “hallucinate” their word of the year. Another term, “confabulate”, also made it into the Cambridge Dictionary.

Google Bard “Dances” to Compete with Microsoft and ChatGPT

When ChatGPT launched in late November 2022, its instant popularity caught everyone off guard, including OpenAI. As people began to speculate that ChatGPT could replace web search, Google sprang into action in January 2023 to counter the threat to its search dominance. When Bing Chat launched in February, Microsoft CEO Satya Nadella said in an interview, “I want people to know we made [Google] dance.” It worked.

Google announced Bard in early February with a botched demo, launched it in limited testing in March, and rolled it out widely in May. For the rest of the year, Google played catch-up with OpenAI and Microsoft, upgrading Bard with the PaLM 2 language model in May and with Gemini in December. The dance isn’t over, but Microsoft definitely has Google’s attention.

OpenAI Fires (and Rehires) Sam Altman

On November 17, the board of nonprofit OpenAI dropped a bombshell: it was firing CEO Sam Altman. Confusing everyone, the board didn’t reveal the exact reason, only saying Altman “was not consistently candid in his communications with the board.”

That weekend, new details emerged, including president Greg Brockman’s resignation in solidarity and chief scientist Ilya Sutskever’s role in the firing. Key investor Microsoft was furious, announced it would hire Altman, and talks about his return soon began. Over 700 OpenAI employees threatened to follow him to Microsoft if he wasn’t reinstated.

It later emerged that Altman’s attempt to push board member Helen Toner off the board reportedly contributed to his firing. About two weeks later, Altman officially returned as CEO, and the company claimed it was more united than ever. Still, the chaotic episode left open questions about the company’s future, and about how wise it is to rely on a potentially unstable organization (with an unusual nonprofit/commercial structure) for the responsible development of what many believe will be world-changing technology.

Technology Keeps Advancing

While we’ve just covered ten major AI storylines from 2023, it feels like they barely scratch the surface of such a packed year. Throughout the year, the media reported on many fascinating AI-generated visual stories, including AI-generated QR codes, geometric spirals, and mind-blowing beer commercials.

Meanwhile, market leader OpenAI never stood still: May saw the release of the ChatGPT mobile app, September brought image recognition to ChatGPT Plus, and November introduced GPT-4 Turbo and GPTs (custom versions of ChatGPT). By year’s end, development of GPT-5 was reportedly underway. Google’s Gemini story is also still unfolding.
