Pirated works by Zadie Smith, Stephen King and Rachel Cusk used to train AI


Works by thousands of authors also including Margaret Atwood, Haruki Murakami and Jonathan Franzen fed into models run by firms including Meta and Bloomberg

Zadie Smith, Stephen King, Rachel Cusk and Elena Ferrante are among thousands of authors whose pirated works have been used to train artificial intelligence tools, a story in The Atlantic has revealed.

More than 170,000 titles were fed into models run by companies including Meta and Bloomberg, according to an analysis of “Books3” – the dataset harnessed by the firms to build their AI tools.

Death of an Author Prophesies the Future of AI Novels

photo: freepik

The first time I played the tabletop game Fiasco, it wasn’t the story my friends and I made that blew me away. It was the realization that I had just experienced the limitless possibilities of collaborative writing, that the novels I loved featured just one way their narratives could have played out. Alice could have transformed the Mad Tea Party into Wonderland’s first organic tea shop. Don Quixote could have devolved into a windmill-killer for hire.

2023 Women's Day statement

Computer woman
photo: chenspec, on Pixabay

This year's theme for International Women’s Day is “DigitALL: Innovation and technology for gender equality.” Intellectual Property (IP) offices and organizations around the globe are coming together to support diversity within our offices and organizations and across the IP system.

Beewise: out-of-the-box thinking to save the world’s bees

Bees are the most important pollinators in the insect world and play a central role in ensuring the global food supply. Without pollination, many plants cannot reproduce. Saar Safra, CEO of Israeli start-up Beewise, is on a mission to save bees – and at scale – using artificial intelligence (AI), computer vision and robotics. Mr. Safra explains how Beewise’s high-tech solution is helping to save the world’s bees.

Artificial intelligence: deepfakes in the entertainment industry

photo: WIPO

Ever since the first Terminator movie was released, we have seen portrayals of robots taking over the world. Now we are at the beginning of a process by which technology – specifically, artificial intelligence (AI) – will enable the disruption of the entertainment and media industries themselves.

From traditional entertainment to gaming, we explore how deepfake technology has become increasingly convincing and accessible to the public, and how much of an impact the harnessing of that technology will have on the entertainment and media ecosystem.

X-rays, AI and 3D printing bring a lost Van Gogh artwork to life

The two wrestlers
photo: University College London

Using X-rays, artificial intelligence and 3D printing, two UCL researchers reproduced a “lost” work of art by renowned Dutch painter Vincent Van Gogh, 135 years after he painted over it.

PhD researchers Anthony Bourached (UCL Queen Square Institute of Neurology) and George Cann (UCL Space and Climate Physics), working with artist Jesper Eriksson, used cutting-edge technology to recreate a long-concealed Van Gogh painting.

How an award-winning AI film was brought to life by text-to-video generation


If you’re impressed by the recent spate of text-to-image generators, get ready for the next step in AI artistry: text-to-video. While huge compute costs and a scarcity of text-to-video datasets have stunted the technique’s growth, recent research has brought its promise closer to reality. A computer artist called Glenn Marshall has given a glimpse of the potential.

The Belfast-based composer recently won the Jury Award at the Cannes Short Film Festival for his AI film The Crow.

Copyright infringement in AI art

Starry night - digital
photo: GDJ
AI models are trained on data. In the case of graphic tools such as Imagen, Stable Diffusion, DALL·E, and Midjourney, the training sets consist of terabytes of images comprising photographs, paintings, drawings, logos, and anything else with a graphical representation. The complaint from some artists is that these models (and their accompanying commercialisation) are being built on the backs of human artists, photographers, and designers, who see no benefit from these business models. The language gets very animated in some forums and chat rooms, often turning on terms such as “theft” and “exploitation”. So is this copyright infringement? Are OpenAI and Google about to be sued by artists and photographers from around the world?